In embedded design, what is the actual overhead of using a Linux OS vs programming directly against the CPU? [closed]

I understand that the answer to this question, like most, is "it depends", but what I am looking for is not so much an answer as a rationale for the different things affecting the decision.
My use case is that I have an ARM Cortex-A8 (TI AM335x) running an embedded device. My options are to use an embedded Linux to take advantage of prebuilt drivers and other things that make development faster, but my biggest concern for this project is the speed of the device. Memory and disk space are not much of a concern. I think it is a safe assumption that programming directly against the MPU, without a full OS, would make the application faster, but gaining a 1 or 2 percent speedup is not worth the extra development time.
I imagine that the largest slowdowns are going to come from kernel context switching and memory mapping, but I do not have the knowledge to correctly assess or gauge the extent of those slowdowns. Any guidance would be greatly appreciated!

Your concerns are reasonable. Going bare metal can and usually will improve performance, but it may only be a few percent improvement... "it depends".
Going bare metal for something that has fully functional drivers in Linux but none for bare metal will cost you development and possibly maintenance time. Is that worth the performance gain?
You also have to ask yourself whether you are using the right platform, and whether you are using the right approach, for whatever it is you want to do on that processor that you think or know is too slow. Are you sure you know where the bottleneck is? Are you sure your optimization is in the right place?
You have not provided any info that would give us a gut feel, so you have to go on your own gut feel as to which path to take: a different embedded platform (pros and cons), bare metal or an operating system, Linux or an RTOS or something else, one programming language vs another, one peripheral vs another, and so on. You won't actually know until you try each of these paths, but that can be, and likely is, cost and time prohibitive...
As far as the generic title question of OS vs bare metal goes, the answer is "it depends". The differences can swing widely, from almost the same to hundreds or thousands of times faster on bare metal. But for any particular application/task/algorithm... it depends.
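If you do go down the Linux path, one cheap way to find out whether the OS overhead actually matters for your workload is to measure it before committing to anything. A minimal sketch, assuming a Linux userland with glibc; workload() is a placeholder you would replace with the code you actually care about:

    #include <stdio.h>
    #include <time.h>

    /* Placeholder for the real workload being measured. */
    static volatile unsigned long sink;
    static void workload(void)
    {
        for (unsigned long i = 0; i < 1000000UL; i++)
            sink += i;
    }

    int main(void)
    {
        struct timespec start, end;
        /* CLOCK_MONOTONIC is unaffected by wall-clock adjustments. */
        clock_gettime(CLOCK_MONOTONIC, &start);
        workload();
        clock_gettime(CLOCK_MONOTONIC, &end);

        double ms = (end.tv_sec - start.tv_sec) * 1e3 +
                    (end.tv_nsec - start.tv_nsec) / 1e6;
        printf("workload took %.3f ms\n", ms);
        return 0;
    }

Running this repeatedly (optionally pinning the process with taskset or raising its priority with chrt) gives a feel for how much variance the scheduler introduces; timing the same loop bare metal against a hardware timer is the natural comparison point.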

Related

Are mainframe systems replaceable? [closed]

I'm currently working as a developer in the mainframe field. Reading a lot of documentation, I understand that the real power of such systems is that they can handle many transaction (input/output) operations at the same time, and that they are also needed to maintain high performance.
So I was wondering: aren't modern systems capable of performing the same or even better?
To correct some misunderstandings:
Mainframe hardware is not "old" -- it has seen continuous development and a refresh cycle every two or three years. The chippery involved is in some ways more advanced than x86 -- things like having a spare CPU on each chip -- and most of the differences are aimed at reliability and availability rather than raw performance.
Having said that, both manufacturers are moving the same electrons around on the same silicon, so actual per-CPU performance is much the same.
Likewise, mainframe software comes in two varieties: "ancient" and "modern".
Some software, like CICS, was first developed in the 1970s, and although it is actively maintained it still contains some of the original code.
Some software (IEBCOPY, we are looking at you) was developed in the 1960s, was considered terrible even then, and has been left untouched for decades.
However, z/OS also runs a fully POSIX-compliant UNIX shell, in which you can run any compliant J2EE application or compile and run any C/C++ program.
While a well set up x86 environment can match the raw processing power, it falls slightly behind when it comes to reliability and availability.
The main reason why so many large corporations stick with the mainframe is the large body of bespoke software written for the COBOL/CICS and PL/1-IMS environments at a time when hardware was expensive and coding efficiency was at a premium.
So you could rewrite an old COBOL/CICS application in Java/J2EE, but you would need about five times the raw processing power for the new system, always assuming you could work out what business rules and logic were embedded in the older system.
There are many factors involved in choosing a platform. The fact is that existing mainframes (generally IBM z/OS systems are what is implied) have a massive amount of existing programs, business processes, disaster recovery plans, etc. that would all need to be refactored. You're talking about migrating existing applications based on runtimes that do not exist on other platforms, not to mention the massive amount of data that exists both transactionally and historically.
For instance, the Customer Information Control System (CICS) uses a specific API, the EXEC CICS command interface, through which program calls, database interactions, and internal programming facilities like queues are provided. All of these programs would need to be rewritten, ported and re-established by moving the programs, processes and data to new platforms. It's rewriting 50 years of a business's investment.
This is inherently risky to a business. You are disrupting existing operations, intellectual property and data to gain what? The cost of any such move is massive and risky, and for what benefit? It ends up being a risk/reward decision.
Bear in mind that there is a new legacy being built on Windows and Linux that will likely be "disrupted" in the future, and it's not likely that one would move all those applications either, for the same reasons.
As #james pointed out, mainframes are close to, if not currently, the fastest single general-purpose computing platforms out there. New hardware versions come out every two years, and software is always being added to the platform: Java, Node, etc. The platform continues to evolve.
It's a complicated subject and not as simple as "use other technology" to perform the same or better. It's moving the programs, data and processes that is really the hard part.
"performing better" is highly unlikely because the mainframe segment is still highly relevant in business and its architectures are kept closely up-to-date with all evolutions both in hardware and [system] software.
"performing the same" is theoretically certainly possible, but it must be noted that some of the more popular mainframe architectures have significantly different hardware setups, e.g. the processors in z/OS systems are, comparatively, pathetically slow, but they delegate lots and lots of work to coprocessors, and it must also be noted that on the software side, mainframers tend to have a higher degree of "resource-awareness" than, eurhm, more "modern" developers.
That said, of course any answers to this question will necessarily be more opinion than hard-facts based, which makes it an unfortunate thing to ask here.

Is there a fundamental flaw in operating system structure? [closed]

At some point I read that operating systems were meant to be created in a certain way (i.e. as a 'microkernel') to be resistant to faults, but are built in another way (i.e. 'monolithic') for practical purposes such as speed. While this is not the question itself, it does bring up the question:
Have any fundamental tradeoffs been made in computer architecture that have reduced security compared with earlier theoretical systems?
I'm looking for answers that are accepted in the field of computer science, not opinions on current implementations. For example, programs could run faster if they were all built on custom hardware; this is known, but it is impractical, which is why we have general-purpose computers.
Anyone saying that microkernels are "the right way" is wrong. There is no "right way". There is no objectively best approach to security.
The problem is that security is not fundamental to computing. It's a sociological issue in a lot of ways; it only exists because humans exist, unlike computation, which is a literal science.
That said, there are principles of security that have held true and been incorporated into hardware and software, like the principle of least privilege. The kernel of an operating system runs, on the hardware, at a higher privilege level than userland processes. That's why your program can't actually interact with hardware directly and has to use system calls to do so.
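To make that concrete: even printing a line of text ends up as a request to the kernel. A minimal sketch, Linux-specific and using glibc's syscall() wrapper, that spells out the system call explicitly:

    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void)
    {
        const char msg[] = "userland cannot touch the hardware directly;\n"
                           "it asks the kernel via a system call\n";
        /* Same effect as write(1, ...), but spelled as an explicit syscall:
           the CPU switches to kernel mode and the kernel drives the device. */
        syscall(SYS_write, 1, msg, sizeof msg - 1);
        return 0;
    }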
There are also issues of complexity, and various measurements of complexity. Programs tend to get more complex as our needs grow - instead of a basic game of pong we now have 1,000 AI units on a giant map. Complexity goes up, and our ability to reason about the program will likely go down, opening up holes for vulnerabilities.
There's no real answer to your question but this: if there is an objective method for security, we haven't discovered it yet.
Security is not a function of the nature of the kernel.
The type of kernel has nothing to do with the security of the operating system, though I would agree that the efficiency of an operating system does depend on its nature.
A monolithic kernel is a single large process running entirely in a single address space; it is a single static binary file. In a microkernel, by contrast, the kernel is broken down into separate processes, known as servers. Some of the servers run in kernel space and some run in user space. All servers are kept separate and run in different address spaces, and communication between them is done via message passing.
Developers seem to prefer microkernels, as they provide flexibility and make it easier to work with different userspaces. A monolithic kernel is somewhat more complex in its nature and is beneficial for lightweight systems.
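As a loose userland analogy for the message-passing idea (not how a real microkernel's IPC works, which is kernel-mediated and far more involved), here is a toy sketch where a "client" process sends a request message to a separate "server" process over a pipe:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    /* Toy illustration of message passing: a "client" sends a request
       over a pipe and a separate "server" process handles it. */
    int main(void)
    {
        int fd[2];
        if (pipe(fd) == -1)
            return 1;

        if (fork() == 0) {              /* "server" process */
            char req[64] = {0};
            close(fd[1]);
            read(fd[0], req, sizeof req - 1);
            printf("server handled request: %s\n", req);
            return 0;
        }

        close(fd[0]);                   /* "client": send a request message */
        const char *req = "open /dev/console";
        write(fd[1], req, strlen(req) + 1);
        close(fd[1]);
        wait(NULL);
        return 0;
    }

In a real microkernel the "server" would be, say, a file-system or driver process, and the kernel's main job would be passing the messages and enforcing the address-space separation.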
Is there some fundamentally flawed way our computers are structured that allows all the security holes that are found? What I mean by this is that there are sometimes proper theoretical ways to do things in computer science that satisfy all our requirements and are robust, etc.
There are certain concepts like protection rings and capability-based security, but in the end this depends on the requirements of the system. For more clarity, be sure to visit the links provided. Some of the ideas are briefly highlighted below.
Capability-based security: although most operating systems implement a facility which resembles capabilities, they typically do not provide enough support to allow for the exchange of capabilities among possibly mutually untrusting entities to be the primary means of granting and distributing access rights throughout the system.
Protection ring: computer operating systems provide different levels of access to resources. A protection ring is one of two or more hierarchical levels or layers of privilege within the architecture of a computer system. This is generally hardware-enforced by some CPU architectures that provide different CPU modes at the hardware or microcode level. Rings are arranged in a hierarchy from most privileged (most trusted, usually numbered zero) to least privileged (least trusted, usually with the highest ring number).

Why don't hardware failures show up at the programming language level? [closed]

I am wondering if anyone can give me a good answer, or at least point me in the direction of a good reference, to the following question:
How come I have never heard of a computer breaking in a very fundamental way? How come when I declare x to be a double it stays as a double? How come there is never a short circuit that robs it of some bytes and makes it an integer? Why do we have faith that when we initialize x to 10, there will never be a power surge that will cause it to become 11, or something similar?
I think I need a better understanding of memory.
Thanks, and please don't bash me over the head for such a simple/abstract question.
AFAIK, the biggest such problem is cosmic background radiation. If a gamma ray hits your memory chip, it can randomly flip a memory bit. This does happen in your computer every now and then. It usually doesn't cause any problem, since it is unlikely to hit the very bit in your Excel input field, for example, and magnetic drives are protected against such accidents. However, it is a problem for long, large calculations. That's what ECC memory was invented for. You can also find more info about this phenomenon here:
http://en.wikipedia.org/wiki/ECC_memory
"The actual error rate found was several orders of magnitude higher than previous small-scale or laboratory studies, with 25,000 to 70,000 errors per billion device hours per megabit (about 2.5–7 × 10−11 error/bit·h)(i.e. about 5 single bit errors in 8 Gigabytes of RAM per hour using the top-end error rate), and more than 8% of DIMM memory modules affected by errors per year."
How come I have never heard of a computer breaking in a very fundamental way?
Hardware is fantastically complicated and there are a huge number of engineers whose job it is to make sure that the hardware works as intended. Whenever Intel, AMD, etc. release chips, they've extensively tested the design and run all sorts of diagnostics before it leaves the plant. They have an economic incentive to do this: if there's a mistake somewhere, it can be extremely costly. Look at the Intel FDIV bug for an example.
How come when I declare x to be a double it stays as a double? How come there is never a short circuit that robs it of some bytes and makes it an integer?
Part of this has to do with how assembly works. Typically, compiled application binaries don't have any type information in them. Instead, they just issue commands like "take the four bytes at position 0x243598F0 and load them into a register." For a variable's type to mutate somehow, a huge amount of application code would have to change. If there were an error that underallocated the space for the variable, it would mess up the stack layout and probably cause a pretty quick program crash, so chances are the result would be "it crashed" rather than "the type got mutated," especially since at the binary level the operations on doubles and integral types are so different.
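One way to see that "double-ness" lives in the source code rather than in the bytes is to look at the same 8 bytes under two interpretations. A small sketch (the memcpy approach is the well-defined way to do this in C):

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void)
    {
        double x = 10.0;
        uint64_t bits;
        /* Copy the same 8 bytes and view them as an integer: the hardware
           and the binary only see bytes; "double" exists in the source. */
        memcpy(&bits, &x, sizeof bits);
        printf("as double:   %f\n", x);
        printf("as raw bits: 0x%016llx\n", (unsigned long long)bits);
        return 0;
    }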
Why do we have faith that when we initialize x to 10, there will never be a power surge that will cause it to become 11, or something similar?
There might be! It's extremely rare, though, because the hardware people do such a good job designing everything. One of the nifty things about being a software engineer is that you sit on top of the food chain:
Software engineers write software that runs in an operating system,
which was written by systems programmers and talks to the hardware,
which was designed by electrical engineers and is built out of hardware gates,
which were fabricated and designed by materials engineers,
who got their materials due to the efforts of mining engineers,
etc.
Lots of engineers make a good living at each link in the chain, which is why everything is so well-tested. Errors do occur, and they do take down real computer systems, but it's relatively rare unless you have thousands or millions of computers running.
Hope this helps!
Answer 1: most of us rarely or never work on a large enough system, or one where this type of consideration is needed. Large databases and file systems have error detection to notice what you describe. In physically large or data-heavy systems, errors when writing or storing data do happen (e.g. packets get lost or corrupted mid-journey, gamma rays hit). Sectors go bad in your hard drive all the time. We have hashing, parity checking, and a host of other methods to notify us when funny issues happen.
Answer 2: our axioms and models. The models we tend to use, the imperative or functional model, don't have "a gamma ray from the sun changed a bit" as a consideration. Just as an environmental scientist may abstract away quarks when studying environmental changes, we abstract away the hardware.
Edit #X:
This is a great question. I actually heard about this recently, by accident. In physics, their models are wrong. Dead wrong. And they know they are wrong, but they use them anyway. When I said this to justify my disdain for the subject, the CS technician at my school verbally backhanded me. He basically said what you said: how do we know 'int x = 10' isn't 11 randomly later?

How do viruses function from a programming point of view? [closed]

I have always been fascinated by computer viruses. For years I have tried to learn about them, but due to their nature people are unwilling to give many details.
For what it is worth I'm not a hacker and am not trying to build a virus.
If anyone is willing to answer this question I want to know what makes a virus a virus and how they are different from spyware.
How can they install themselves onto a computer without you noticing?
And how do worms work? How can a program replicate and move on its own? Does it contain its source code within it? And does it interface with other programs, or just access the hardware directly, to spread?
EDIT: What language would they be written in? Would you use assembly/C++-type languages, or create them as scripts in Lua?
Well, a worm is simply a self-replicating piece of software. Imagine a program that copies its executable over some link to another computer and launches it there. That's not that much magic.
A virus is simply a worm which infects other executables, i.e. it does not replicate its own image, but "backpacks" onto a different application's image and uses that application's execution flow to get initiated.
The user does not notice anything if there are no side effects and no UI interaction.
If the user is technically more competent than the average end user, this is very hard to achieve. Some malware hosts the target system in a virtual machine, so you as the user have a hard time seeing anything suspicious as long as you don't realize you are looking at a virtual machine. Like Neo, awaking from the Matrix.
As there is no limit to what you can implement in which language, there is no language of choice. Naturally, a low-level and natively compiled language is more versatile for doing what a virus/worm must do to stay low-profile. However, there are worms and viruses written in assembly language, Basic, C, Delphi, JavaScript, whatever -- there is nothing you cannot imagine here.
Spyware has similar requirements but different goals. While a virus or a worm usually spreads around, either for no reason or to drop some kind of payload at some point, spyware wants to either "phone home" or open up the target system so it can be attacked, i.e. inspected, more easily, usually in order to get hold of a victim's data that is secret, personal, or otherwise interesting.
Hope this quick answer helps a bit. You can google more details easily at bing :)

How important is it for programming skills to have nice gadgets? [closed]

This question was asked by Ed Burns in his book 'Riding the Crest'. I remember that almost all of the rock-star programmers found it helpful to have new and cool gadgets. A programmer stays in touch with the latest designs and hardware and software implementations, which may also affect his work.
What is your opinion on this question?
New gadgets are useful if they expand your horizon.
For example, I recently got myself an iPod touch; this has deeply changed my appreciation for touch-screen user interfaces -- I only knew "point of sale" touchscreen interfaces, which are usually horrible.
I believe it is fairly irrelevant.
Firstly, every domain (for example Web, OS X, iPhone, Windows) has its own aesthetics which means experience from gadgets won't necessarily transfer that well, in the same way a great Windows UI won't necessarily be a great OS X interface.
And owning a gadget hardly ever teaches about the underlying hardware or software implementation.
However, being able to appreciate great design, wherever it appears, whether that is in gadgets, literature or architecture, has to be useful. And a curiosity about the world and a determination for life to be better will probably often lead great programmers to get gadgets; however, this is a case of correlation not being the same as causation. The gadgets don't help the programming skills, but the same traits drive both.
I think what Burns might be getting at there is exposure to other design paradigms. If you are programming in Windows and you get the latest and greatest WinMo phone, you're exposed to a different platform, but really it's just a baby Windows. Contrast that with being a Windows programmer and getting an iPhone or a G1. You're being shown a very different way to get things done, and you'll be able to pick up the parts you like out of someone else's vision.
There's a competitive aspect to many fields that software is often lacking. Competition helps you by showing you how other people solved the problem that you're looking at. If they are selling like gangbusters and you aren't, well, something's up there huh?
Gadgets aren't so important, the PC itself is. Having a fairly new PC, with a nice screen, keyboard and mouse is a must. You are using them most of the day after all, so no point spending loads on the PC and getting cheap peripherals!
For me it's all about keeping things interesting, as I can get bored working on the same thing over and over.
Having a new gadget gives you something new to play with, thus increasing enthusiasm and helping to pick up new things, in turn making you a better developer.
I guess not everyone needs that motivation, but I find it can help during a lull, and it doesn't even need to be new hardware, I'm just as happy to pick up a new bit of technology / language etc, I find it has the same effect.
I'm not a big fan of all the gadget craze. I always try to stay current with new technologies, but I don't think that consuming gadgets has anything to do with it.
Cool gadgets are a good excuse to spend money and increase your cool factor.
Depends on the programmer. Many programmers would be happy with cool gadgets as a job perk, but I wouldn't say it affects their productivity directly. If I had to choose, I'd rather get a good chair than a palmtop of the same price.
Things I've missed while working as a programmer in various companies of all sizes:
A decent chair (jesus people)
A good, fast computer (even if they don't work 3D)
A large screen (two if possible)
A hand-held device capable of reading mail (I suppose this would fit as a 'gadget')
Depends what you're working on. I'd say that if you're doing UI work, have lots of diverse UIs to play with. Make sure they have a Mac and a PC, maybe one or two different kinds of smartphones and/or a PDA -- if you're that kind of company, maybe even a Nintendo Wii in the breakroom.
If I can program on the gadget, sure.
I get considerably less out of it (for programming) if I don't get to program on it.
It's a self-image maintenance thing. Having the latest geekbling helps make one feel like the sort of wired.com poster boy who's on top of all the trends, which motivates one to keep on top of the trends.
Really, almost anything you see people doing that seems somewhat inexplicable is probably an identity maintenance activity.
