x86 Assembly program runs poorly under VirtualBox (XGA Graphics - VBE)

I'm trying to understand how virtualization is affecting my x86 assembly program.
Normally I boot up an old clunker Pentium III and boot DOS off a USB key. Instead I tried setting up VirtualBox and working that way. My programs seem to run fine, but the colors appear to be all screwed up.
It's a fairly straightforward program assembled under NASM that switches the video mode into XGA 4105h and draws some simple shapes in varying colors.
Is it safe to assume that the issue stems from the fact that the ACTUAL video mode of my machine isn't really running in XGA mode, so the bits for the colors are interpreted differently? (forgive the ignorance in explanation, as I know little about how virtualization works on an ISA level)
How might I get around this issue? I'd like to continue to do x86 graphics programming, but I like being able to work mobile.
EDIT: I see that, at least under Windows, when trying to execute 16-bit code, Windows runs it in a virtualized environment that doesn't give the program direct video-card access, but instead gives access to a "virtual card" which typically doesn't extend beyond VGA...
But because I am already in a virtual environment, how does that play into this? Am I totally hooped?

4105h is a standardized VBE mode number for a VBE 1.x BIOS and also for DOSBox. Starting with VBE 2.0, however, the mode numbers are no longer standardized. With a VBE 2.0 or VBE 3.0 BIOS we have to get the mode numbers from the card's BIOS, unless we are using DOSBox.
To get these mode numbers from a VBE 2.0 or VBE 3.0 BIOS, we call VBE function 4F00h, which fills a 512-byte buffer with the VBE/SVGA controller information. At offset 0Eh in this buffer is the far address (offset, segment) of the mode list, and from that address we can read the mode numbers. The list is terminated by the word 0FFFFh.
Even under DOSBox, for each mode number we can get the mode-specific information with VBE function 4F01h, returned in another 256-byte buffer. From this we can check the mode attributes, the resolution, the bits per pixel, the bytes per scanline, the field position and mask size of the red, green and blue components, and other criteria, to see whether the mode meets the requirements we are looking for.
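Here is a minimal NASM real-mode sketch of that enumeration (an illustrative outline for a DOS .COM program, with buffer names of my own choosing and almost no error handling, not production code):

; Sketch: enumerate VBE modes via functions 4F00h/4F01h (16-bit DOS .COM).
; Assemble with: nasm -f bin vbelist.asm -o vbelist.com
org 100h

start:
    mov ax, 4F00h           ; VBE: get controller information
    mov di, vbe_info        ; ES:DI -> 512-byte buffer (ES = CS in a .COM)
    int 10h
    cmp ax, 004Fh           ; AX = 004Fh means supported and successful
    jne done

    les si, [vbe_info+0Eh]  ; far pointer at buffer+0Eh -> mode list (ES:SI)

next_mode:
    mov cx, [es:si]         ; CX = next mode number
    add si, 2
    cmp cx, 0FFFFh          ; list terminator
    je done

    push es                 ; keep the mode-list pointer safe
    push si
    push cs
    pop es                  ; ES:DI -> our 256-byte mode-info buffer
    mov ax, 4F01h           ; VBE: get mode information for mode CX
    mov di, mode_info
    int 10h
    pop si
    pop es
    cmp ax, 004Fh
    jne next_mode
    ; mode_info+00h = attributes, +10h = bytes per scanline,
    ; +12h/+14h = X/Y resolution, +19h = bits per pixel,
    ; +1Fh.. = red/green/blue mask sizes and field positions.
    ; Compare these against what the program needs, then pick the mode.
    jmp next_mode

done:
    mov ax, 4C00h           ; back to DOS
    int 21h

vbe_info:  times 512 db 0
mode_info: times 256 db 0

Once a suitable mode number has been found, that is the value to pass to VBE function 4F02h (with bit 14 set if a linear framebuffer is wanted) to actually switch modes.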
Dirk

Related

How do you find the bitness of OS/390 or z/OS?

What is the command for finding whether an OS/390 or z/OS system is 31-bit or 64-bit?
Since there didn't seem to be a "real" answer on this thread, I thought I'd provide one just in case anyone needs the information...
The definitive source of whether you're running in 64-bit mode is the STORE FACILITY LIST (STFL, or STFLE) hardware instruction. It sets two different bits - one to indicate that the 64-bit zArchitecture facility is installed, and one to indicate that the 64-bit zArchitecture facility is active (it was once possible to run in 31-bit mode on 64-bit hardware, so this would give you the "installed, but not active" case).
The operating system generously issues STFL/STFLE during IPL, saving the response in the PSA (that's low memory, starting at location 0). This is handy, since STFL/STFLE are privileged instructions, but testing low storage doesn't require anything special. You can check the value at absolute address 0xc8 (decimal 200) for the 0x20 bit to tell that the system is active in 64-bit mode, otherwise it's 31-bit mode.
Although I doubt there are any pre-MVS/XA systems alive anymore (that is, 24-bit), for completeness you can also test CVTDCB.CVTMVSE bit - if this bit is not set, then you have a pre-MVS/XA 24-bit mode system. Finding this bit is simple - but left as an exercise for the reader... :)
If you're not able to write a program to test the above, then there are a variety of ways to display storage, such as TSO TEST or any mainframe debugger, as well as by looking at a dump, etc.
While I was not able to find commands to give this information, I think below is what you're looking for:
According to this: https://en.wikipedia.org/wiki/OS/390
z/OS is OS/390 with various extensions including support for 64-bit architecture.
So if you're on a zSeries processor with z/OS, you're on 64-bit.
According to this: https://en.wikipedia.org/wiki/IBM_ESA/390
OS/390 was installed on ESA/390 computers, which were 32-bit computers, but were 31-bit addressable.
For either z/OS or OS/390 I believe you can do a D IPLINFO and look for ARCHLEVEL. ARCHLEVEL 1 = 31 bit, ARCHLEVEL 2 = 64 bit. But it's been a very long time since I've been on an OS/390 system.

How is Text produced on the Computer?

Let's say I want the text "Hello World" on my monitor. How does the computer represent the text graphically at the binary level?
That's a broad question: the answer differs based on the hardware, and potentially on the application or the OS.
In general, the hardware system you are using will have a defined text encoding that maps given binary values to character glyphs (or something similar: pixel patterns and screen colors). These glyphs are loaded into the screen's memory buffer, which is displayed on the next refresh.
So, in a very basic sense, let's say you have an embedded system with an LCD board. In this case it would not be images, but pixel patterns being mapped. You would likely have an 8-bit encoding that supports ASCII. You would load your binary values (that represent the text you want to display) into the LCD's memory/memory buffer. After the memory/buffer is loaded a command would need to be issued to the board to refresh. The display would change based on what you loaded into the memory.
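As a concrete PC example of "character codes loaded into the screen's memory buffer", here is a small NASM sketch of my own (not part of the answer above) for DOS text mode: each cell of the 80x25 text screen is a character byte plus an attribute byte, and the video hardware's character generator turns them into glyphs on the next refresh.

; Sketch: write "Hello World" straight into VGA text-mode memory (16-bit DOS .COM).
; Assemble with: nasm -f bin hello.asm -o hello.com
org 100h

    mov ax, 0B800h          ; segment of colour text-mode video memory
    mov es, ax
    xor di, di              ; offset 0 = top-left character cell
    mov si, msg
    mov ah, 1Fh             ; attribute byte: white text on blue background

next_char:
    lodsb                   ; AL = next character code from the string
    test al, al             ; a zero byte ends the string
    jz done
    stosw                   ; store character (AL) and attribute (AH) in the cell
    jmp next_char

done:
    mov ax, 4C00h           ; return to DOS
    int 21h

msg db 'Hello World', 0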
If you are working very low level, then you would have to define that relationship at a driver level. Likely having to work with how to manipulate pixels via memory buffers based on binary values.
It gets more complex with, say, the computer you used to ask this question.
When you type something on your screen, this is roughly what happens:
1: The keyboard raises an interrupt to the processor and delivers a scan code identifying the key you pressed, which is later mapped to a character code such as ASCII
2: The processor looks for the memory location (which was setup by the operating system) that has the instructions to handle the interrupt
3: The interrupt is then interpreted by the operating system (let's say, Linux)
4: If there's a process waiting for input, the operating system delivers the key code to that process (let's say, Bash)
5: Bash receives the code, and sends an instruction to the operating system to display certain characters in the screen device
6: The operating system receives the instruction from Bash, and sends it to the screen device
7: The screen device receives the instruction, translates the bits into pixels and shows them on your screen
All this is abstraction. In the end, everything is binary, and if you want to get down there you should first understand the abstractions (assembly, C, operating systems, devices, memory, the processor, etc.)
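To tie the list above to something runnable at the lowest level a DOS machine exposes, here is a tiny NASM sketch of mine (not from the answer) that waits for a key with the BIOS keyboard service and echoes it back through the BIOS teletype service; on a modern system the OS, shell and driver layers described above take the place of these BIOS calls.

; Sketch: read one key and echo it using BIOS services (16-bit DOS .COM).
org 100h

    mov ah, 00h             ; int 16h / AH=0: wait for a key press
    int 16h                 ; returns ASCII code in AL, scan code in AH
    mov ah, 0Eh             ; int 10h / AH=0Eh: teletype output of the character in AL
    xor bh, bh              ; display page 0
    int 10h
    mov ax, 4C00h           ; return to DOS
    int 21h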

Why do some emulators need a BIOS dump?

Why do some emulators need a BIOS dump?
For example Playstation emulators do, but Gameboy and SNES emulators don't.
Most Game Boy and SNES emulators implement the BIOS functionality themselves, so you don't need to add BIOS dump files as external sources.
BIOS dumps contain intellectual property, so it is illegal to distribute them without consent from the manufacturer. My guess is that most developers do not want to include any intellectual property in their emulators. It is the same reason why you won't find emulators being distributed with game ROMs.
EDIT
Taking Gameboy Advance as an example, according to "GBA BIOS FAQ":
The original BIOS code is copyrighted by Nintendo, and, for that reason, not included in the no$gba package.
No$gba includes some sort of a BIOS 'clone'. These 'simulated' functions are providing exactly the same return values as the real BIOS, including for undocumented and 'undefined' return values, and are fully compatible with most or all existing GBA software.
Taking the Game Boy Classic and Color as an example, according to "Pan Docs" the Game Boy BIOS provides the following functionality:
displays the Nintendo logo at the top of the screen and scrolls it to the middle of the screen
plays two musical notes on the internal speaker
compares the internal Nintendo logo with the one in the cartridge; if they do not match, the Game Boy halts
verifies the cartridge header checksum
So, without the BIOS file, Game Boy emulators won't perform these functions unless they are programmed into the emulator itself.
Basically, if the programmers decide not to ship the BIOS file with their emulator, they have two options: either allow users to add the BIOS file manually, or implement the BIOS behaviour in the emulator itself.

Is kernel or userspace responsible for rotating framebuffer to match screen

I'm working on an embedded device with the screen rotated 90 degrees clockwise: the screen controller reports an 800x600 screen, while the device's screen is 600x800 portrait.
What do you think, whose responsibility is it to compensate for this: should the kernel rotate the framebuffer to provide the 800x600 screen expected by upper-level software, or should applications (X server, bootsplash) adapt and draw to the rotated screen?
Every part of the stack is free software, so there are no non-technical obstacles to modification; the question is more about logical soundness.
It makes most sense for the screen driver to do it - the kernel after all is supposed to provide an abstraction of the device for the userspace applications to work with. If the screen is a 600x800 portrait oriented device, then that's what applications should see from the kernel.
Yes, I agree. The display driver should update the display accordingly and keep control of it.
Not sure exactly how standard your embedded device is, but if it is running a regular Linux kernel you might check the kernel configurator (make xconfig, when compiling a new kernel): one of the options for kernel 2.6.37.6, in the device drivers / video section, is to enable rotation of the kernel message display so it is turned 90 degrees left or right while booting up.
I think it also makes your consoles rotate correctly after login.
This was not available in kernels even 6-8 months ago, at least not in the kernel that Slackware64 13.37 came with around that time.
Note that the BIOS messages on a PC motherboard will still come out in the original orientation, because that is hard-coded in the BIOS; this may not apply to the embedded system you are working with.
If this kernel feature is not useful to you for whatever reason, how they did it in the Linux kernel might be a good example of where and how to go about it. Once you get the exact name of the option from "make xconfig", it should be pretty easy to search wherever the kernel changes are logged for that name and dig up some info about it.
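For reference, on kernels where this option exists, the relevant pieces look roughly like this (my own notes, worth double-checking against your kernel's Documentation/fb/fbcon.txt):

# Kernel configuration (Device Drivers -> Graphics support -> Console display driver support):
CONFIG_FRAMEBUFFER_CONSOLE_ROTATION=y

# Boot-time rotation of the framebuffer console, appended to the kernel command line
# (0 = normal, 1 = 90 degrees clockwise, 2 = 180, 3 = 90 degrees counter-clockwise):
fbcon=rotate:1

# Rotating the current console at runtime through sysfs:
echo 1 > /sys/class/graphics/fbcon/rotate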
Hmmm. I just recompiled my kernel today, and I may have been wrong about how new this option is. Looks like it was available with some kernel versions before the included-with-Slackware64 versions that I referenced. Sorry!

Protected Mode Keyboard Access on x86 Assembly

I'm working on keyboard input for a very basic kernel that I'm developing and I'm completely stuck. I can't seem to find any information online that can show me the information I need to know.
My kernel is running in protected mode right now, so I can't use the real mode keyboard routines without jumping into real mode and back, which I'm trying to avoid. I want to be able to access my keyboard from protected mode. Does anyone know how to do this? The only thing I have found so far is that it involves talking to the controller directly using in/out ports, but beyond that I'm stumped. This, of course, is not something that comes up very often; Assembly tutorials normally assume you're running an operating system underneath.
I'm very new to x86 assembly, so I'm just looking for some good resources for working with the standard hardware from protected mode. I'm compiling the Assembly source code with NASM and linking it to the C source code compiled with DJGPP. Any suggestions?
The MIT operating systems class has lots of good references. In particular, check out Adam Chapweske's resources on keyboard and mouse programming.
In short, yes, you will be using the raw in/out ports, which requires either running in kernel mode, or having the I/O permission bits (IOPL) set in the EFLAGS register. See this page for more details on I/O permissions.
You work with standard legacy hardware the same way in real and protected modes. In this case, you want to talk to the 8042 at I/O ports 0x60 to 0x6f, which in turn talks to the controller inside the keyboard at the other end of the wire.
A quick Google search found me an interesting resource at http://heim.ifi.uio.no/~stanisls/helppc/8042.html (for the 8042) and http://heim.ifi.uio.no/~stanisls/helppc/keyboard_commands.html (for the keyboard).
In case you are not used to it, you talk with components at I/O ports via the IN (read) and OUT (write) opcodes, which receive the I/O port number (a 16-bit value) and the value to be read or written (either 8, 16, or 32 bits). Note that the size read or written is important! Writing 16 bits to something which is expecting 8 bits (or vice versa) is a recipe for disaster. Get used to these opcodes, since you will be using them a lot (it is the only way to talk to some peripherals, including several essential ones; other peripherals use memory-mapped I/O (MMIO) or bus-mastering DMA).
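To make the port I/O concrete, here is a minimal polling sketch in NASM (my own illustration, assuming 32-bit protected mode at ring 0 or with suitable IOPL; it is not taken from the linked resources): it waits for the 8042's output buffer to fill and reads one scan code.

; Sketch: poll the 8042 and read a single scan code (32-bit protected mode).
; Port 0x64 is the controller status register, port 0x60 the data register.
[BITS 32]
read_scancode:
.wait:
    in   al, 0x64           ; read the 8042 status byte
    test al, 0x01           ; bit 0 set = output buffer full, a byte is waiting
    jz   .wait              ; nothing yet, keep polling
    in   al, 0x60           ; read the scan code (make or break code)
    ret                     ; raw scan code returned in AL

A real kernel would normally hook IRQ 1 through the interrupt controller instead of busy-polling, and translate the make/break scan codes into characters with a lookup table, but the port traffic is the same.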
The 8042 PS/2 Controller looks like the simplest possibility.
The oszur11 OS tutorial contains a working example under https://sourceforge.net/p/oszur11/code/ci/master/tree/Chapter_06_Shell/04_Makepp/arch/i386/arch/devices/i8042.c
Just:
sudo apt-get install build-essential qemu
sudo ln -s /usr/bin/qemu-system-i386 /usr/bin/qemu
git clone git://git.code.sf.net/p/oszur11/code oszur11
cd oszur11/Chapter_06_Shell/04_Makepp
make qemu
Tested on Ubuntu 14.04 AMD64.
My GitHub mirror (upstream inactive): https://github.com/cirosantilli/oszur11-operating-system-examples
Not reproducing it here because the code is too long; I will update if I manage to isolate the keyboard part in a minimal example.
