BIOS interrupts in assembly language - Linux

How do I call an interrupt in assembly language using NASM on a 32-bit architecture? I have tried many times, but never with the desired result: on Linux I get a core dump, and on Windows nothing happens in CMD. I have read that 32-bit user applications run at ring 3, while the kernel and drivers run at ring 0. How can I call an interrupt from user level?
I followed someone on YouTube who got this working in Visual Studio with C++ or C (with inline and external assembly files), but when I call any interrupt in an external file or inline, Visual Studio reports a memory access violation.
This is on the Intel 32-bit architecture (ring levels).

You can't use BIOS (or DOS) interrupts under an OS like Linux or Windows. Use system calls (Linux) or WinAPI library calls (Windows).
There is no portable ABI for interacting with the system outside your own process in assembly that works under both Linux and Windows; macOS is incompatible as well.
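For a taste of what the Linux route looks like, here is a minimal sketch assuming 32-bit x86 Linux and the classic int 0x80 kernel entry point (build with gcc -m32). It invokes sys_write directly; this is the user-level counterpart of what you were trying to do with BIOS interrupts:

    /* Minimal sketch, assuming 32-bit x86 Linux: int 0x80 enters the kernel,
     * not the BIOS. eax selects the system call (4 = sys_write on i386),
     * ebx/ecx/edx carry the arguments. Build with: gcc -m32 hello.c */
    int main(void) {
        static const char msg[] = "hello via int 0x80\n";
        long ret;
        __asm__ volatile ("int $0x80"
                          : "=a"(ret)            /* return value in eax */
                          : "a"(4),              /* eax: sys_write */
                            "b"(1),              /* ebx: fd 1 (stdout) */
                            "c"(msg),            /* ecx: buffer */
                            "d"(sizeof msg - 1)  /* edx: length */
                          : "memory");
        return 0;
    }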
To use a BIOS interrupt:
Make sure the BIOS exists and all of the state it depends on hasn't been modified. If the computer booted with UEFI, then the BIOS doesn't exist. If an OS has started, it will have clobbered the state the BIOS depends on (e.g. PIC chip config, PIT config, PCI configuration space, BIOS data area, IVT, ...).
Make sure you're in real mode or similar. If your code is 32-bit, then you need to switch back to real mode, or set up a virtual8086 task (and its monitor), or use some sort of emulator (e.g. to interpret the BIOS's code instead of executing it directly).
Note that there are some special cases (e.g. the old "Advanced Power Management" API that was superseded by ACPI, or the VESA BIOS Extensions) where a protected-mode interface is provided as a (sometimes optional) alternative. These are mostly painful (e.g. they involve setting up special descriptors for "16-bit protected mode" and copying binary blobs into them) and almost never worth the hassle.


Linux's system calls for GUI?

I'm studying operating systems. I read that Windows has lots of system calls for managing windows and GUI components, and I have read that you can change the GUI manager of your Linux operating system. So does Linux have system calls for GUI management? How does the GUI work in Linux?
I'll take x86 as an example, as I am more aware of x86 than ARM. Also, I may get some information wrong, since I've been researching this question while answering; feel free to correct me where I'm wrong.
System booting
Some time ago, Linux booted with a legacy bootloader (GRUB legacy). The GRUB bootloader would be started by the BIOS at 0x7c00 in RAM and would then read the kernel from the hard disk, following the multiboot specification. The multiboot specification describes the state the computer needs to be in before the bootloader jumps to the kernel's entry point. The kernel would then launch a first process (init) that every other process would be a child of.
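To make the specification concrete, here is a hedged sketch of the Multiboot 1 header such a kernel image carries so the loader can recognize it. Real kernels usually emit this in assembly at the very start of the image; the field values follow the Multiboot spec (magic 0x1BADB002, and a checksum chosen so the three fields sum to zero):

    /* Hedged sketch of a Multiboot 1 header written in C (usually asm).
     * A multiboot loader scans the start of the kernel image for the magic
     * value; the checksum makes magic + flags + checksum == 0 (mod 2^32). */
    #include <stdint.h>

    struct multiboot_header {
        uint32_t magic;
        uint32_t flags;
        uint32_t checksum;
    };

    __attribute__((section(".multiboot"), used))
    static const struct multiboot_header mb_header = {
        .magic    = 0x1BADB002,
        .flags    = 0,
        .checksum = (uint32_t)-(0x1BADB002 + 0),
    };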
Today, most Linux distributions boot with UEFI (legacy booting is usually still available as an option). A UEFI app is placed on the boot partition, which is partitioned as a GPT ESP (EFI System Partition). This EFI app is launched and then follows the Linux Boot Protocol to launch Linux. The init process has also been replaced by systemd, so Linux launches systemd as the first process of the computer. As stated on the manpage for systemd:

systemd is usually not invoked directly by the user, but is installed as the /sbin/init symlink and started during early boot.

The process that is started is thus /sbin/init, but it is a symlink to systemd. The systemd process then reads several configuration files from the hard disk called units. Some of these units are targets, which group several other units to read. At first, systemd reads default.target, which specifies several other units. Some of those units start processes, among which is the display manager (fancy terminology which means login prompt). Recently, Ubuntu has been starting the GNOME Display Manager (GDM) as the first displaying program (the gdm.service unit). This program starts the X server before presenting the user login screen (https://en.wikipedia.org/wiki/X_display_manager):
When the display manager runs on the user's computer, it starts the X server before presenting the user the login screen, optionally repeating when the user logs out.
Once you are logged in, GDM starts several other binaries responsible for letting you interact with the system (the actual desktop, a binary to gather input for this desktop, etc.). All of these components depend on the X server to work properly.
The DRM
The X server is a user program which makes extensive use of the Direct Rendering Manager (DRM) of the Linux kernel. The DRM is a system-call interface used to interact with graphics cards. When the DRM detects a graphics card, it exposes a file such as /dev/dri/card0, which is a character device (http://manpages.ubuntu.com/manpages/bionic/man7/drm.7.html):
In earlier days, the kernel framework was solely used to provide raw hardware access to privileged user-space processes which implement all the hardware abstraction layers. But more and more tasks were moved into the kernel. All these interfaces are based on ioctl(2) commands on the DRM character device. The libdrm library provides wrappers for these system calls and many helpers to simplify the API.

When a GPU is detected, the DRM system loads a driver for the detected hardware type. Each connected GPU is then presented to user-space via a character device that is usually available as /dev/dri/card0 and can be accessed with open(2) and close(2). However, it still depends on the graphics driver which interfaces are available on these devices. If an interface is not available, the syscalls will fail with EINVAL.
The ioctl call allows any number of operations on the /dev/dri/card0 file, since it is a general-purpose call that takes a request argument (simply an unsigned long) plus a variable number of further arguments (see https://man7.org/linux/man-pages/man2/ioctl.2.html).
The ioctl call thus allows hardware vendors (NVIDIA, AMD, etc.) to provide drivers for their cards, with the general ioctl call serving as the interface between user mode and kernel mode.
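As a hedged illustration, this sketch opens the card node and issues the DRM_IOCTL_VERSION request to ask the driver for its name and version. It assumes the libdrm headers are installed (you may need -I/usr/include/libdrm) and that a /dev/dri/card0 node exists:

    /* Sketch: one generic ioctl(2) request selecting one driver operation. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <drm/drm.h>   /* struct drm_version, DRM_IOCTL_VERSION */

    int main(void) {
        int fd = open("/dev/dri/card0", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        char name[64] = {0};
        struct drm_version v = { .name = name, .name_len = sizeof name - 1 };

        if (ioctl(fd, DRM_IOCTL_VERSION, &v) == 0)
            printf("driver %s %d.%d.%d\n", name,
                   v.version_major, v.version_minor, v.version_patchlevel);
        else
            perror("ioctl");   /* e.g. EINVAL if the interface is unavailable */

        close(fd);
        return 0;
    }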
OpenGL
Several 3D rendering APIs are available (OpenGL, Direct3D). OpenGL is mostly a set of C headers and a convention: the convention says what a certain call should do, and it is up to the hardware vendor to implement the convention for their own cards. Mesa3D is an attempt to create an open-source implementation of OpenGL for certain graphics cards. It works quite well for integrated Intel HD Graphics (since the documentation is open) and for AMD (since they cooperated and offered some insight into the workings of their cards), but not for NVIDIA (the Nouveau driver is mostly not working, or slow).
When you program with OpenGL, you include the OpenGL headers and link against libraries provided by the hardware vendor, which supply the definitions of the functions declared in the headers. These definitions make use of the DRM and cooperate with the X server to show content on the screen.
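A small hedged sketch of that split: the prototype below comes from the OpenGL headers, while the definition is resolved at link/load time against whichever libGL the vendor (or Mesa) installed. It assumes a current OpenGL context has already been created elsewhere (via GLX or EGL) and that you link with -lGL:

    /* Sketch: header/implementation split in OpenGL. glGetString is declared
     * in GL/gl.h; its definition lives in the vendor's (or Mesa's) libGL. */
    #include <GL/gl.h>
    #include <stdio.h>

    void print_gl_info(void) {
        /* Valid only with a current OpenGL context (created elsewhere). */
        printf("vendor:   %s\n", (const char *)glGetString(GL_VENDOR));
        printf("renderer: %s\n", (const char *)glGetString(GL_RENDERER));
    }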
System calls (provided by the kernel) are often buried (in some cases deliberately undocumented and proprietary) and should not be used directly. Almost everything you see is actually a normal function in a dynamically linked/shared library. This allows the kernel's system calls to be radically changed without breaking everything (because everything only depends on the shared libraries), and it reduces the functionality needed in the kernel itself.
For an example: most of the "system calls for managing windows and GUI components" you think Windows has could (internally, inside the relevant DLL) just end up using a single hypothetical "send_message()" system call (to tell a different process, the GUI, that you want to create a window, change its position, ...).
For Linux it's roughly similar. The kernel's system calls (which actually are documented - for no sane reason, as it goes against the spirit of the SYS-V specs and means badly written "Linux executables" aren't compatible with other Unix clones like FreeBSD, Solaris, or OS X) exist for things like low-level memory management, raw file I/O, and sockets; but (like on Windows) the kernel's system calls are buried under layers of shared libraries, and those shared libraries (e.g. Xlib, GLib, KWindowSystem, Qt, ...) just use "something" (file I/O, pipes, sockets, ...) provided by the kernel to talk to another process (the display server, GUI, ...).
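A minimal sketch of those layers on Linux: the same bytes written through the raw system call, through the thin libc wrapper, and through the buffered stdio layer that most programs actually use:

    /* Sketch: three layers over one kernel facility (sys_write). */
    #include <stdio.h>
    #include <string.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void) {
        const char msg[] = "hello\n";
        syscall(SYS_write, 1, msg, strlen(msg)); /* raw kernel entry */
        write(1, msg, strlen(msg));              /* thin libc wrapper */
        fputs(msg, stdout);                      /* buffered stdio on top */
        return 0;
    }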
Linux and Windows fall into separate categories: Linux is just a kernel, i.e. the piece under the hood that gives us the basic functionality we expect to run programs - threads, memory and process management, and so on. Windows is a full operating system, including the user-facing components and numerous system libraries. An apter comparison would be between a specific Linux distro and Windows.
On that note, distros, as independent operating systems, obviously can have different implementations of any OS component. Some distros, like Arch, don't come with a GUI by default at all. That said, essentially the entire Linux ecosystem uses Xorg and/or Wayland; I would recommend looking into the implementation details of those two.
A Linux GUI has quite a few differences compared to a Windows GUI. For example, the GUI is not considered part of the operating system, but rather external to it; that means no GUI syscalls (it is not embedded in the OS at all). After all, as the previous answer says, Linux is a kernel, which means it only provides the really basic things (execution of programs, memory/thread management, process management, but not much more). Whatever comes next (the GUI, for example) consists of features added through packages.
This allows, for example, installing a GUI on top of a minimal installation of any Linux distro (CentOS, for example), and that GUI can be whichever one you want (GNOME, KDE, ...).

Is an operating system kernel an interpreter for all other programs?

From my understanding, there are two types of programs: those that are interpreted and those that are compiled. Interpreted programs are executed by an interpreter that is a native application for the platform it is on, while compiled programs are themselves native applications (or system software) for the platform they are on.
But my question is this: is anything besides the kernel actually being directly run by the CPU? A Windows executable is a "Windows executable", not an x86 or amd64 executable. Does that mean every process that's not the kernel is literally being interpreted by the kernel, in the same way that a browser interprets JavaScript? Or does the kernel place these processes on the "bare metal" that the kernel sits on top of?
If they're on the "bare metal", how does Windows know that a program is a Windows program and not a Linux program, since both are compiled for amd64 processors? If it's because of the "format" of the executable, how is that executable able to run on the "bare metal", since, to me, the fact that it's formatted to run on a particular OS would mean that some interpretation is required for it to run?
They run on the "bare metal", but they do contain operating-system-specific things. An executable file typically provides some instructions to the kernel (which are, arguably, "interpreted") as to how the program should be loaded into memory, and the file's code provides ways for it to "hook" into the running operating system, such as through the operating system's API or via device drivers. Once such a non-interpreted program is loaded into memory, it runs on the bare metal but continues to communicate with the operating system, which is also running on the bare metal.
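To make the "format" part concrete, here is a hedged sketch of how a loader can tell executable formats apart just from a file's magic bytes (ELF for Linux and the BSDs, the MZ stub of PE for Windows); real loaders of course read much more than this:

    /* Sketch: distinguishing executable formats by their magic bytes. */
    #include <stdio.h>

    int main(int argc, char **argv) {
        if (argc < 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }
        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 1; }
        unsigned char magic[4] = {0};
        size_t got = fread(magic, 1, sizeof magic, f);
        fclose(f);
        if (got >= 4 && magic[0] == 0x7f && magic[1] == 'E'
                     && magic[2] == 'L' && magic[3] == 'F')
            puts("ELF executable (Linux, BSDs, ...)");
        else if (got >= 2 && magic[0] == 'M' && magic[1] == 'Z')
            puts("MZ/PE executable (Windows)");
        else
            puts("unrecognized format");
        return 0;
    }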
In the days of single-process operating systems, it was common for executables to essentially "seize" control of the entire computer and communicate with hardware directly. Computers like the Apple ][ and the Commodore 64 work like that. In a modern multitasking operating system like Windows or Linux, applications and the operating system share use of the CPU via a complex multitasking arrangement, and applications access the hardware via a set of abstractions built in to the operating system's API and its device drivers. Take a course in Operating System design if you are interested in learning lots of details.
Bouncing off Junaid's answer: the way the kernel blocks a program from doing something "funny" is by controlling the allocation and usage of memory. The kernel requires that memory be requested and accessed through its API, and thus protects the computer from "unauthorized" access. In the days of single-process operating systems, applications had much more freedom to access memory and other things directly, without involving the operating system. An application running on an old Apple ][ can read from or write to any address in RAM that it wants to on the entire computer.
One of the reasons a compiled application won't just "run" on another operating system is that these "hooks" are different for different operating systems. For example, an application that knows how to request an allocation of RAM from Windows might not have any idea how to request it from Linux or macOS. As Disk Crasher mentioned, these low-level access instructions are inserted by the compiler.
I think you are confusing things. A compiled program is in machine-readable format. When you run the program, the kernel allocates memory, CPU time, etc., and ensures that the program does not interfere with other programs. If the program requires access to hardware resources or the disk, the kernel handles it, so the kernel always sits between the hardware and any software you run in user space.
If the program is interpreted, then a relevant interpreter for that language converts the code to machine-readable form on the fly, and the kernel still provides the same functionality, such as access to hardware, while making sure programs aren't doing anything funny like trying to access another program's memory.
The only thing that runs on the "bare metal" is machine code, which is abstracted from the programmer by many layers in the OS and compiler. Generally speaking, applications are compiled for an OS and a CPU architecture. They will not run on other OSes, at least not without a compatible framework in place (e.g. Mono on Linux).
Back in the day, a lot of code was written on bare metal using macro assemblers, but that's pretty much unheard of on PCs today. (And there was even a time before macro assemblers.)

Can I use JTAG to debug my program on top of embedded Linux?

I am using an AT91SAM9260 for my development. A Linux kernel runs on it, and I start my own software on top of it.
I was wondering if I could use a JTAG debugger to debug the software I am working on without seeing too much of what is going on in the Linux kernel.
I am asking because I think it might become very complex to debug my software while seeing the full Linux execution.
In other words, could there be some abstraction layer when debugging with a JTAG probe?
Probably not -- as far as I know, most JTAG debuggers rely on setting breakpoints in the processor. Under a multitasking OS, that stops the OS kernel too.
Embedded OSes like QNX have debuggers that operate on top of the OS kernel and communicate over Ethernet.
Generally, yes, you can use JTAG: a debugger has absolutely nothing to do with what software you happen to be running on that processor. Where you can get into trouble is the cache. Say you stop the processor, want to change some instructions in RAM, and restart: changing instructions in RAM is a data access, which goes through the data cache, not the instruction cache. If you have separate instruction and data caches, they are enabled, and some of the instructions you have modified are at addresses that are already in the instruction cache, you can get messed up pretty fast, with a mix of new and stale instructions being fed to the processor. Linux likes to use the caches when they are there.
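The same hazard bites anything that writes instructions through the data side, e.g. a JIT or a debugger planting software breakpoints. Here is a hedged sketch of the fix using the GCC/Clang builtin for syncing the instruction cache (patch_code and its arguments are made up for illustration):

    /* Sketch: after modifying code through the D-cache, sync the I-cache.
     * patch_code is a hypothetical helper; the builtin is real (GCC/Clang). */
    #include <stddef.h>
    #include <stdint.h>

    void patch_code(uint8_t *code, size_t len, const uint8_t *new_bytes) {
        for (size_t i = 0; i < len; i++)
            code[i] = new_bytes[i];      /* data access: goes via the D-cache */
        /* Without this, the CPU may keep executing stale instructions. */
        __builtin___clear_cache((char *)code, (char *)code + len);
    }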
Second is the MMU. The processor/JTAG likely operates on virtual addresses, on the processor side of the MMU, not on physical addresses. So, depending on how the hardware works, if you set a breakpoint by address in a debug unit in the processor, and the operating system task-switches to another program/thread at that same address, you will break on the wrong program at the right address. If the debugger/processor instead sets breakpoints by modifying an instruction in RAM, then you run into the cache problem above; when nothing is cached you will break on the right instruction in the right thread, but otherwise you have that cache problem.
Bottom line: absolutely. If the processor supports JTAG-based debugging, that doesn't change based on whatever software you choose to run on that processor.
It depends on the JTAG device and its driver. Personally, I know only one device capable of doing that: the XDS560 with Code Composer Studio (CCS). But there may be others.
I suggest consulting the manufacturer of your device.
For ARM, the Asset Arium family is claimed to be able to debug application code. I haven't tried it, though.

How do emulators/virtual computers work?

How does an emulator work, like the Android one? And a virtual PC like VirtualBox?
In one form, the software reads the binary in the same way that hardware on the real system would: it fetches instructions, decodes them, and executes them using variables in a program instead of registers in hardware. Memory and other I/O are similarly emulated/simulated. To be interesting beyond just an instruction-set simulator, it also needs to simulate hardware, so it may contain software that pretends to behave, for example, like a VGA video card. Ideally, the software run on the emulator cannot tell that the memory/I/O comes from simulated hardware - you do enough to fool the software being simulated - while you honor what those register writes and reads mean by making calls to the operating system you are running on and/or to the hardware directly (assuming the emulated program thinks it is talking to hardware and not to an emulator).
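A toy sketch of that fetch/decode/execute loop, for a made-up four-instruction machine (the opcodes and register file are invented for illustration):

    /* Sketch: registers become variables, the ISA becomes a switch. */
    #include <stdint.h>
    #include <stdio.h>

    enum { OP_HALT, OP_LOAD, OP_ADD, OP_PRINT };   /* invented opcodes */

    int main(void) {
        /* "ROM": LOAD r0,#5; LOAD r1,#7; ADD r0,r1; PRINT r0; HALT */
        uint8_t mem[] = { OP_LOAD,0,5, OP_LOAD,1,7, OP_ADD,0,1, OP_PRINT,0, OP_HALT };
        uint32_t reg[2] = {0};
        size_t pc = 0;

        for (;;) {
            uint8_t op = mem[pc++];                          /* fetch */
            switch (op) {                                    /* decode + execute */
            case OP_LOAD:  { uint8_t r = mem[pc++]; reg[r] = mem[pc++]; break; }
            case OP_ADD:   { uint8_t a = mem[pc++], b = mem[pc++]; reg[a] += reg[b]; break; }
            case OP_PRINT: { uint8_t r = mem[pc++];
                             printf("r%u = %u\n", (unsigned)r, (unsigned)reg[r]); break; }
            case OP_HALT:  return 0;
            }
        }
    }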
The next level up is a virtual machine. For the case I am describing, the instruction set matches: say you want to virtualize an x86 program on an x86 host machine. The long and short of it is that the host processor/machine has hardware features that allow you to run the actual instructions of the program being virtualized, so long as those instructions are simple register-based, stack, or other local-memory operations. Once the program ventures out of its memory space, the virtualization hardware interrupts the operating system; the virtual-machine software, like VMware or VirtualBox, then examines the memory or I/O request from the software being virtualized, determines whether it was a video card request or a USB device or a NIC or whatever, and then emulates the device in question much as a purely non-virtualized setup would. A virtual machine can often outrun a purely emulated machine because it allows a percentage of the software to run at the full speed of the processor. The downside is that the virtual machine has to match the software being run. An emulator can be far more accurate and portable than a virtual machine, at the cost of performance.
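On Linux, this kind of hardware support is exposed to hypervisors through the kernel's KVM interface (VirtualBox and VMware ship their own kernel drivers instead). As a hedged sketch, this just opens /dev/kvm and asks for the KVM API version, which is the first step any KVM-based VM takes:

    /* Sketch: first contact with the hardware-virtualization interface. */
    #include <fcntl.h>
    #include <linux/kvm.h>   /* KVM_GET_API_VERSION */
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void) {
        int kvm = open("/dev/kvm", O_RDWR);
        if (kvm < 0) { perror("open /dev/kvm"); return 1; }
        int ver = ioctl(kvm, KVM_GET_API_VERSION, 0);
        printf("KVM API version: %d\n", ver);   /* 12 on current kernels */
        close(kvm);
        return 0;
    }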
The next level up is something like Wine or Cygwin, where you not only do something like a virtual machine - running native instructions and trapping memory requests - but go beyond that and trap operating system calls, so that you can run a program compiled for one operating system on another operating system, and much faster than in a virtual machine. Instead of trapping a hardware-level register or memory access to a video card, you trap the operating system call for a bitblt, a fill, a line draw, or a string draw with a specific font, etc. Then you can translate that operating system request into calls to the native operating system.
At their simplest, emulators and virtual computers provide an abstraction layer, built on top of the host system (the actual physical system running the emulator), that implements the emulated system's functionality for the code that is to be run.
Emulators and virtual machines simulate hardware like a PC or an Android phone. A virtual machine (or virtual PC) looks at an operating system's machine-code instructions and runs them on top of your current (host) operating system in a virtual computer.
http://en.wikipedia.org/wiki/Virtual_machine
and
http://en.wikipedia.org/wiki/Emulator
Depending on the type of virtualization, a virtual machine is not always an emulator.

Using a 64-bit driver from a 32-bit program on Windows

This is only half a programming question. First of all, I have a PCI Express card and 32/64-bit drivers. The target operating system has to be a 64-bit Windows system. I read that under Vista x64 all drivers have to be certified 64-bit drivers. Is this a general restriction of 64-bit operating systems, and does it also apply to XP x64 or to any Linux system?
So, for simplicity, let's say I use a 64-bit driver for my PCIe card under Vista x64 and have a bunch of 64-bit DLLs to use the card's functionality. On the other side, there's a large legacy 32-bit exe program which needs to use the PCIe device. Converting the program to 64-bit would be a really huge effort.
So what can be done to bring that 32-bit program and the 64-bit driver together? I read that mixing 32-bit and 64-bit binaries and DLLs is not possible at all, but I find that hard to believe. I'm sure you can print a document under Vista x64 from within a 32-bit app, and Windows will somehow route this to a 64-bit printer driver.
64-bit certification is only required under Vista; there is no certifying authority for non-Windows platforms, and I don't believe that XP or Windows Server checks for certification (I'm not sure, though, and it may depend on which service pack you're on).
If you're using the driver via the Windows API, there shouldn't be any problem; Windows does the 32<->64-bit translations in the kernel. If you're trying to load the driver inside your own process, that probably won't be possible. As Dirk says, you'll have to run it inside its own process and communicate through a COM server. I'm not sure what hoops you'll have to jump through if you have to run your driver at a higher-privilege execution level and want to make calls to it from user mode.
Hopefully your 64-bit DLLs offer a 32-bit API, or Windows offers a standard driver interface (if it's a common I/O device like a display or network card).
Does your 32-bit application call the driver directly? (I'm guessing it's a simulator for the driver!)
The only way to communicate between 32-bit and 64-bit DLLs is to write a COM server that manages the communication (read: wraps EITHER the application's calls OR the 64-bit driver's responses) in between.
One thing that came back to bite me: when I first wrote this COM server (yes, I too had to bear many sleepless nights before I came to know of this trick), I only built the 32-bit version of the (auto-generated) proxy/stub DLL. Another bout of sleepless nights ensued before I came to know of the solution: build the proxy/stub DLL for both 32-bit and 64-bit. The 32-bit side deals with the 32-bit side (in your case, the application) and the 64-bit side with the 64-bit side (the driver). COM manages how the different versions of the proxy/stub talk to each other. And oh, do get the server registered on your system. Easy, right?
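The answer above uses COM; to show the general out-of-process shape with less machinery, here is a hedged sketch of the 64-bit helper side using a Win32 named pipe instead (the pipe name and the forwarding step are made up for illustration). A 32-bit client would open the same name with CreateFile and exchange messages:

    /* Sketch (not the COM approach): a 64-bit helper process that owns the
     * driver handle and serves one request from a 32-bit client over a pipe.
     * The pipe name is hypothetical. Build as a 64-bit Windows program. */
    #include <windows.h>
    #include <stdio.h>

    int main(void) {
        HANDLE pipe = CreateNamedPipeA("\\\\.\\pipe\\driver_bridge",
                                       PIPE_ACCESS_DUPLEX,
                                       PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_WAIT,
                                       1, 512, 512, 0, NULL);
        if (pipe == INVALID_HANDLE_VALUE) { fprintf(stderr, "pipe error\n"); return 1; }

        if (ConnectNamedPipe(pipe, NULL) || GetLastError() == ERROR_PIPE_CONNECTED) {
            char req[512]; DWORD n = 0;
            if (ReadFile(pipe, req, sizeof req, &n, NULL)) {
                /* ...here the 64-bit side would call into the 64-bit driver DLL... */
                const char reply[] = "ok";
                WriteFile(pipe, reply, sizeof reply, &n, NULL);
            }
        }
        CloseHandle(pipe);
        return 0;
    }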
I think the whole point of a driver is to abstract away the actual workings of the hardware and present a common interface to the software. In this case, the PCIe driver needs to be 64-bit so that it can act as a go-between for Windows and the hardware, but I would think that a 32-bit application could then access the device without any trouble at all.
What's meant by the incompatibility you read about is that 32-bit and 64-bit assemblies can't be part of the same application - an application has to target either one or the other - though a 32-bit application will generally run fine on Windows x64 using WoW64, which just acts as a translator.
Are you currently experiencing problems, or are you just asking hypothetically?
