How does an emulator work, like the Android one? And how does a virtual PC like VirtualBox work?
In one form, the software reads the binary the same way the hardware on the real system would: it fetches instructions, decodes them, and executes them using variables in a program instead of registers in hardware. Memory and other I/O are similarly emulated/simulated. To be interesting beyond just an instruction set simulator, it also needs to simulate hardware, so it may have software that pretends to behave, for example, like a VGA video card. Ideally the software run on the emulator cannot tell that the memory/I/O is coming from simulated hardware; ideally you do enough to fool the software being emulated. You also try to honor what those register reads and writes mean by making calls to the operating system you are running on and/or to the hardware directly (assuming, of course, the program thinks it is talking to hardware and not an emulator).
The next level up would be a virtual machine. For the case I am describing it is a matching instruction set, so say you want to virtualize an x86 program on an x86 host machine. The long and short of it is that the host processor/machine has hardware features that allow you to run the actual instructions of the program being virtualized, so long as the instructions are simple register-based, stack, or other local-memory operations. Once the program ventures out of its memory space, the virtualization hardware interrupts the operating system; the virtual machine software like VMware or VirtualBox then examines the memory or I/O request from the software being virtualized, determines whether it was a video card request, a USB device, a NIC, or whatever, and then emulates the device in question much in the same way a pure non-virtualized setup would. A virtual machine can often outrun a purely emulated machine because it allows a percentage of the software to run at the full speed of the processor. The downside is that you have to have a virtual machine that matches the software being run. An emulator can be far more accurate and portable than a virtual machine, at the cost of performance.
The next level up would be something like Wine or Cygwin, where not only are you doing something like a virtual machine, running native instructions and trapping memory requests, but you are going beyond that and trapping operating system calls, so that you can run a program compiled for one operating system on another operating system, but much faster than a virtual machine. Instead of trapping the hardware-level register or memory access to a video card, you trap the operating system call for a bitblt or fill or line draw or string draw with a specific font, etc. Then you can translate that operating system request into calls to the native operating system.
At their simplest, emulators or virtual computers provide an abstraction layer, built on top of the host system (the actual physical system running the emulator), that implements the emulated system's functionality for the code that is to be run.
Emulators and virtual machines simulate hardware like a PC or an Android phone. A virtual machine (or virtual PC) takes an operating system's machine code instructions and runs them on top of your current (host) operating system in a virtual computer.
http://en.wikipedia.org/wiki/Virtual_machine
and
http://en.wikipedia.org/wiki/Emulator
Depending on the type of virtualization, a virtual machine is not always an emulator.
Related
I really like the idea of running and optimizing my software on old hardware, because you can viscerally feel when things are slower (or faster!). The most obvious way to do this is to buy an old system and literally use it for development, but that would slow down my IDE, my compiler, and all other development tasks, which is less helpful and (possibly) unnecessary.
I want to be able to:
Run my application at various levels of performance, on demand
At the same time, run my IDE, debugger, compiler at full speed
On a single system
Nice to have:
Simulate real, specific old systems, with some accuracy
Similarly throttle memory speed, and size
Optionally run my build system slowly
Try using QEMU in full emulation mode, but keep in mind it uses more CPU resources.
https://stuff.mit.edu/afs/sipb/project/phone-project/OldFiles/share/doc/qemu/qemu-doc.html
QEMU has two operating modes:
Full system emulation. In this mode, QEMU emulates a full system (for example a PC), including one or several processors and various peripherals. It can be used to launch different Operating Systems without rebooting the PC or to debug system code.
User mode emulation (Linux host only). In this mode, QEMU can launch Linux processes compiled for one CPU on another CPU.
The possible architectures can be seen here:
https://wiki.qemu.org/Documentation/Platforms
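As a usage sketch (the machine type, image names, and paths are placeholders for whatever you actually have), the two modes are invoked roughly like this:

```shell
# Full system emulation: boot a complete guest (here an ARM board,
# while the host might be x86), including CPU and peripherals.
qemu-system-arm -M versatilepb -m 256 -kernel zImage -hda rootfs.img

# User mode emulation (Linux host only): run a single binary compiled
# for another CPU, translating its syscalls to the host kernel.
qemu-arm ./hello-arm
```

Full system emulation is the slower but more faithful of the two, which is what you want for feeling realistic old-hardware performance.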
So, from my understanding, there are two types of programs: those that are interpreted and those that are compiled. Interpreted programs are executed by an interpreter that is a native application for the platform it's on, and compiled programs are themselves native applications (or system software) for the platform they are on.
But my question is this: is anything besides the kernel actually being directly run by the CPU? A Windows Executable is a "Windows Executable", not an x86 or amd64 executable. Does that mean every other process that's not the kernel is literally being interpreted by the kernel in the same way that a browser interprets Javascript? Or is the kernel placing these processes on the "bare metal" that the kernel sits on top of?
If they're on the "bare metal", how does Windows know, say, that a program is a Windows program and not a Linux program, since they're both compiled for amd64 processors? If it's because of the "format" of the executable, how is that executable able to run on the "bare metal", since, to me, the fact that it's formatted to run on a particular OS would mean that some interpretation would be required for it to run.
Is this question too complicated for Stack Overflow?
They run on the "bare metal", but they do contain operating system-specific things. An executable file will typically provide some instructions to the kernel (which are, arguably, "interpreted") as to how the program should be loaded into memory, and the file's code will provide ways for it to "hook" in to the running operating system, such as by an operating system's API or via device drivers. Once such a non-interpreted program is loaded into memory, it runs on the bare metal but continues to communicate with the operating system, which is also running on the bare metal.
In the days of single-process operating systems, it was common for executables to essentially "seize" control of the entire computer and communicate with hardware directly. Computers like the Apple ][ and the Commodore 64 work like that. In a modern multitasking operating system like Windows or Linux, applications and the operating system share use of the CPU via a complex multitasking arrangement, and applications access the hardware via a set of abstractions built in to the operating system's API and its device drivers. Take a course in Operating System design if you are interested in learning lots of details.
Bouncing off Junaid's answer, the way the kernel blocks a program from doing something "funny" is by controlling the allocation and usage of memory. The kernel requires that memory be requested and accessed through it via its API, and thus protects the computer from "unauthorized" access. In the days of single-process operating systems, applications had much more freedom to access memory and other things directly, without involving the operating system. An application running on an old Apple ][ can read from or write to any address in RAM on the entire computer.
One of the reasons why a compiled application won't just "run" on another operating system is that these "hooks" are different for different operating systems. For example, an application that knows how to request the allocation of RAM from Windows might not have any idea how to request it from Linux or the Mac OS. As Disk Crasher mentioned, these low level access instructions are inserted by the compiler.
I think you are confusing things. A compiled program is in machine-readable format. When you run the program, the kernel will allocate memory, CPU, etc., and ensure that the program does not interfere with other programs. If the program requires access to hardware resources or disk, the kernel will handle it, so the kernel always sits between the hardware and any software you run in user space.
If the program is interpreted, then a relevant interpreter for that language will convert the code to machine-readable form on the fly, and the kernel will still provide the same functionality, like access to hardware and making sure programs aren't doing anything funny, like trying to access another program's memory.
The only thing that runs on "bare metal" is assembly language code, which is abstracted from the programmer by many layers in the OS and compiler. Generally speaking, applications are compiled to an OS and CPU architecture. They will not run on other OS's, at least not without a compatible framework in place (e.g. Mono on Linux).
Back in the day a lot of code used to be written on bare metal using macro assemblers, but that's pretty much unheard of on PCs today. (And there was even a time before macro assemblers.)
Normally, MMU-less systems don't have an MPU (memory protection unit) either, and there's also no distinction between user and kernel modes. In such a case, assuming we have an MMU-less system with some piece of hardware mapped into the CPU address space, does it really make sense to have device drivers in the kernel, if all the hardware resources can be accessed from userspace?
Does kernel code have more control over memory than user code?
Yes, on MMU-less platforms that host uClinux, it makes sense to do everything as if you had a normal embedded Linux environment. It is a cleaner design to have user applications and services go through their normal interfaces (syscalls, etc.) and have the OS route those kernel requests through to device drivers, file systems, the network stack, etc.
Although the kernel does not have more control over the hardware in these circumstances, the actual hardware should only be touched by system software running in the kernel. Not limiting access to the hardware would make debugging things like system resets and memory corruption virtually impossible. This practice also makes your design more portable.
Exceptions may be for user mode debugging binaries that are only used in-house for platform bring-up and diagnostics.
My question is rather broad, I know, but I have been wondering about this for a long time.
A little background: I work in a physics lab where all the lab computers are running Debian (a mix of old versions and Lenny) or, more recently, Ubuntu 10.04 LTS. We have written a lot of custom software to interface with experiment hardware and other computers.
We have a lot of FPGA boards controlling various parts of the experiment; these are connected via USB to different computers. After upgrading a computer controlling an experiment, we started seeing crashes/lockups of the computer running all the lasers. This used to be completely stable.
My question is this: If the entire computer locks up because of an issue with
a) Python/GTK software gui
b) USB device driver
or
c) The actual device
can this be blamed on the Linux kernel (or other levels of the OS)?
Is it unfair to ask the Linux kernel not to panic even if I make mistakes in my implementation of software/hardware?
My own guess: any user-level application should never be able to crash the entire system, since it should only have access to its own stuff.
Any device driver becomes a part of the kernel itself and will therefore be able to crash it. Is my reasoning sound?
Bonus question: Is there a way to insulate the device and kernel somehow, such that Linux will keep running happily no matter what stupid mistakes are made with the hardware? That would be very useful for two reasons:
1) debugging is easier with a running system, and
2) for the purposes of the experiment we really need long uptimes, and having only a part of the system crash is infinitely better than crashes in one part of the system propagating to the rest.
Any links and reading material on this subject would be appreciated. Thank you.
You are correct that unprivileged code should not be able to bring down the system, unless there's a kernel bug. The line between unprivileged and privileged isn't exactly the same as user-space vs kernel, however. A user-mode program can open /dev/kmem and trash the OS's internal data structures, if the user account has superuser privileges.
To insulate the main kernel from device driver problems, run the device driver inside a virtual machine.
Several popular VM systems, including VMware Workstation, support forwarding an arbitrary USB device from the host to the guest without a device-specific driver on the host.
I am using an at91sam9260 for my developments. There is a Linux kernel running in it and I start my own software on top of it.
I was wondering if I could use a JTAG debugger to debug the software I am working on without seeing too much of what is going on in the Linux kernel.
I am asking because I think it might become very complex to debug my software while seeing the full Linux execution.
In other words, I would like to know if there could be some abstraction layer when debugging with a JTAG probe.
Probably not -- as far as I know, most JTAG debuggers assume the ability to set breakpoints in the processor. Under a multitasking OS, that stops the OS kernel too.
Embedded OSes like QNX have debuggers that operate on top of the OS kernel and communicate over Ethernet.
Generally, yes, you can use JTAG: a debugger has absolutely nothing to do with what software you happen to be running on that processor. Where you can get into trouble is the cache. For example, say you stop the processor, want to change some instructions in RAM, and restart. Changing instructions in RAM is a data access, which goes through the data cache, not the instruction cache. If you have separate instruction and data caches, they are enabled, and some of the instructions you have modified are at addresses that are in the instruction cache, you can get messed up pretty fast, with a mix of new and stale instructions being fed to the processor. Linux likes to use the caches if they are there.
Second is the MMU. The processor/JTAG is likely operating on the virtual addresses on the processor side of the MMU, not the physical addresses. So, depending on how the hardware works, if for example you set a breakpoint by address in a debug unit in the processor and the operating system task-switches to another program/thread in that same address space, you will break on the wrong program at the right address. If the debugger/processor sets breakpoints by modifying an instruction in RAM, then you run into the cache problem above; if nothing is cached, you will break on the right instruction in the right thread, but otherwise you have that cache problem.
Bottom line: absolutely, if the processor supports JTAG-based debugging, that doesn't change based on whatever software you choose to run on that processor.
It depends on the JTAG device and its driver. Personally, I know of only one device capable of doing that: the XDS560 + Code Composer Studio (CCS). But there may be others.
I suggest consulting the manufacturer of your device.
For ARM, the Asset Arium family is claimed to be able to debug application code. I haven't tried it, though.