I'm a teenager who has become very interested in assembly language. I'm trying to write a small operating system in Intel x86 assembler, and I was wondering how to write directly to the screen, without relying on the BIOS or any other operating system. I looked through the sources of coreboot, Linux, and Kolibri, among others, hoping to find and understand some piece of code that does this. I haven't succeeded yet, though I think I'll take another look at the Linux source code, it being the most understandable of the sources I've searched through.
If anybody knows how to do this, or knows of some piece of source code I could look at, I would appreciate it.
Or better yet, if someone knows how to identify which I/O port on an Intel x86 CPU connects to which piece of hardware, that would be appreciated too. The reason I ask is that I could not find this information in the input/output chapter of the Intel 64 and IA-32 Architectures Software Developer's Manual Volume 1: Basic Architecture, nor in the sections for the IN and OUT instructions in the instruction set reference (Volume 2), and searching for the relevant instructions in the sources I have has been too arduous.
PART 1
For old VGA modes, there's a fixed address to write to the (legacy) display memory area. For text modes this area starts at 0x000B8000. For graphics modes it starts at 0x000A0000.
For high-resolution video modes (e.g. those set by the VESA/VBE interface) this doesn't work, because the legacy display memory area is limited to 64 KiB and most high-resolution video modes need a lot more space (e.g. 1024 * 768 at 32 bpp needs 3 MiB). To get around that, there are two different methods supported by VBE.
The first method is called "bank switching", where only part of the video card's display memory is mapped into the legacy area at any time (and you can change which part is mapped). This can be quite messy - for example, to draw one pixel you might need to calculate which bank the pixel is in, then switch to that bank, then calculate the offset within that bank. To make this worse, in some video modes (e.g. 24-bpp modes where there are 3 bytes per pixel) the first part of a pixel's data might be in one bank and the second part of the same pixel's data in a different bank. The main benefit is that it works with real mode addressing, as the legacy display memory area is below 0x00100000.
The second method is called "Linear Framebuffer" (or just "LFB"), where the video card's entire display memory area can be accessed without any messy bank switching. You have to ask the VESA/VBE interface where this area is (and it's typically in the "PCI hole" somewhere between 0xC0000000 and 0xFFF00000). This means you can't access it in real mode, and need to use protected mode or long mode or "unreal mode".
To find the address of a pixel when you're using an LFB mode, you'd do something like "pixel_address = display_memory_address + y * bytes_per_line + x * bytes_per_pixel". The "bytes_per_line" comes from the VESA/VBE interface (and may not be the same as "horizontal_resolution * bytes_per_pixel" because there can be padding between horizontal lines).
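For illustration, here's what that formula looks like in C for a 32-bpp LFB mode. The framebuffer address and bytes_per_line values below are made-up placeholders; the real values come from the VBE mode information block:

#include <stdint.h>

/* Placeholder values - in practice these come from the VBE
   mode information block (PhysBasePtr and BytesPerScanLine). */
static uint8_t *display_memory = (uint8_t *)0xE0000000;
static uint32_t bytes_per_line = 4096;

/* Plot one pixel in a 32-bpp linear framebuffer mode. */
static void put_pixel_32bpp(uint32_t x, uint32_t y, uint32_t colour)
{
    uint32_t *pixel = (uint32_t *)(display_memory + y * bytes_per_line + x * 4);
    *pixel = colour;
}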
For "bank switched" VBE/VESA modes, it becomes something more like:
pixel_offset = y * bytes_per_line + x * bytes_per_pixel;
bank_number = pixel_offset / bank_size;
pixel_starting_address_within_bank = pixel_offset % bank_size;
For some old VGA modes (e.g. the 256-colour "mode 0x13") it's very similar to LFB, except there is no padding between lines and you can do "pixel_address = display_memory_address + (y * horizontal_resolution + x) * bytes_per_pixel". For text modes it's basically the same thing, except 2 bytes determine each character and its attribute - e.g. "char_address = display_memory_address + (y * horizontal_resolution + x) * 2".
For other old VGA modes (monochrome/2-colour, 4-colour and 16-colour modes) the video card's memory is arranged completely differently. It's split into "planes", where each plane contains one bit of the pixel, so that (for example) to update one pixel in a 16-colour mode you need to write to 4 separate planes. For performance reasons the VGA hardware supports different write modes and different read modes, and it can get complicated (too complicated to describe adequately here).
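To make the mode 0x13 case concrete, here's a sketch of a put-pixel, assuming you're in protected mode (or unreal mode) so the 0x000A0000 area is directly addressable:

#include <stdint.h>

/* Mode 0x13: 320x200, 256 colours, one byte per pixel. */
static void put_pixel_mode13(unsigned int x, unsigned int y, uint8_t colour)
{
    uint8_t *vga = (uint8_t *)0x000A0000;
    vga[y * 320 + x] = colour;
}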
PART 2
For I/O ports (on 80x86, "PC compatibles"), there are three general categories. The first is "de facto standard" legacy devices that use fixed I/O ports. This includes things like the PIC chips, the ISA DMA controller, the PS/2 controller, the PIT chip, serial/parallel ports, etc. Almost anything that describes how to program one of these devices will tell you which I/O ports it uses.
The next category is legacy/ISA devices, where the I/O ports a device uses are determined by jumpers on the card itself, and there's no sane way to determine from software which I/O ports are in use. To get around this, the end user has to tell the OS which I/O ports each device uses. Thankfully this crusty stuff has become obsolete (although that doesn't necessarily mean nobody is using it).
The third category is "plug & play", where there's some method of asking the device which I/O ports it uses (and, in most cases, of changing the I/O ports it uses). An example of this is PCI, where there's a "PCI configuration space" that tells you lots of information about each PCI device. For this category there is no way to determine which devices will be using which I/O ports without doing it at run-time, and changing some BIOS settings can cause any or all of these devices to change I/O ports.
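As an example of the third category, here's a sketch of reading a PCI device's configuration space through the legacy 0xCF8/0xCFC ("CONFIG_ADDRESS"/"CONFIG_DATA") I/O ports. The outl/inl wrappers are assumed helpers around the OUT and IN instructions (GCC inline assembly shown):

#include <stdint.h>

/* Minimal port I/O wrappers (GCC inline assembly). */
static inline void outl(uint16_t port, uint32_t val)
{
    __asm__ volatile ("outl %0, %1" : : "a"(val), "Nd"(port));
}

static inline uint32_t inl(uint16_t port)
{
    uint32_t val;
    __asm__ volatile ("inl %1, %0" : "=a"(val) : "Nd"(port));
    return val;
}

/* Read one 32-bit register from PCI configuration space. */
static uint32_t pci_config_read(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t offset)
{
    uint32_t address = 0x80000000u              /* enable bit */
                     | ((uint32_t)bus << 16)
                     | ((uint32_t)dev << 11)
                     | ((uint32_t)fn  << 8)
                     | (offset & 0xFC);         /* dword-aligned offset */
    outl(0xCF8, address);                       /* CONFIG_ADDRESS */
    return inl(0xCFC);                          /* CONFIG_DATA */
}

Reading offset 0 of a device returns its vendor and device IDs (0xFFFFFFFF means no device is present at that bus/device/function).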
Also note that an Intel CPU is only a CPU. Nothing prevents those CPUs from being used in something radically different from a "PC compatible" computer. Intel's CPU manuals will never tell you anything about hardware that exists outside the CPU itself (including the chipset or devices).
PART 3
Probably the best place to go for more information (that's intended for OS developers/hobbyists) is http://osdev.org/ (their wiki and their forums).
To write directly to the screen, you should probably write to the VGA text mode area. This is a block of memory that acts as a buffer for text mode.
The text-mode screen consists of 80x25 characters, and each character cell is 16 bits wide. The low byte holds the character value, usually interpreted through code page 437 (or 737 - it can vary from system to system). The high byte is the attribute byte: if its top bit is set, the character blinks on-screen; the next 3 bits select the background colour; and the low 4 bits select the foreground (text) colour.
Here is a Wikipedia page detailing this buffer, and here is a link to codepage 437
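Putting that layout into code, a helper like this (hypothetical, just to show the bit packing) builds one 16-bit cell:

#include <stdint.h>

/* Pack one text-mode cell: low byte is the character, high byte is the
   attribute (bit 7 blink, bits 6-4 background, bits 3-0 foreground). */
static uint16_t vga_cell(char c, uint8_t fg, uint8_t bg, int blink)
{
    uint8_t attr = (blink ? 0x80 : 0x00) | ((bg & 0x07) << 4) | (fg & 0x0F);
    return ((uint16_t)attr << 8) | (uint8_t)c;
}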
Almost all BIOSes will set the mode to text mode before your system is booted, but some laptop BIOSes will not boot into text mode. If you are not already in text mode, you can set it with int 10h very simply:
xor ah, ah     ; AH = 0x00: set video mode function
mov al, 0x03   ; AL = 0x03: 80x25 16-colour text mode
int 0x10       ; call the BIOS video service
(The above code uses BIOS interrupts, so it has to be run in Real Mode. I suggest putting this in your bootsector.)
Finally, here is a set of routines I wrote for writing strings in protected mode.
unsigned int terminalX;
unsigned int terminalY;
uint8_t terminalColor;
volatile uint16_t *terminalBuffer;
unsigned int strlen(const char *str) {
    unsigned int len = 0;  //must be initialised; it was left uninitialised before
    while(str[len] != '\0') {
        len++;
    }
    return len;
}
void initTerminal() {
    terminalColor = 0x07;  //light grey on black
    terminalBuffer = (uint16_t *)0xB8000;
    terminalX = 0;
    terminalY = 0;
    for(int y = 0; y < 25; y++) {
        for(int x = 0; x < 80; x++) {
            terminalBuffer[y * 80 + x] = (uint16_t)terminalColor << 8 | ' ';
        }
    }
}

void setTerminalColor(uint8_t color) {
    terminalColor = color;
}

void putCharAt(int x, int y, char c) {
    unsigned int index = y * 80 + x;
    if(c == '\r') {
        terminalX = 0;
    } else if(c == '\n') {
        terminalX = 0;
        terminalY++;
    } else if(c == '\t') {
        terminalX = (terminalX + 8) & ~(7);
    } else {
        terminalBuffer[index] = (uint16_t)terminalColor << 8 | c;
        terminalX++;
        if(terminalX == 80) {
            terminalX = 0;
            terminalY++;
        }
    }
}

void writeString(const char *data) {
    for(int i = 0; data[i] != '\0'; i++) {
        putCharAt(terminalX, terminalY, data[i]);
    }
}
You can read up about this on this page.
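Typical usage of these routines might look like this:

initTerminal();
setTerminalColor(0x0A);                        /* light green on black */
writeString("Hello from protected mode!\n");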
A little beyond my scope but you might want to look into VESA.
This is not so simple. While the BIOS provides int 10h to write text to the screen, graphics differs from one adapter to the next. For example, you can find information for VGA here: http://www.wagemakers.be/english/doc/vga and for some ancient SVGA adapters here: http://www.intel-assembler.it/portale/5/assembly-game-programming-encyclopedia/assembly-game-programming-encyclopedia.asp
For general I/O ports you have to go through the BIOS, which means interrupts. Many blue moons ago I used the references from Don Stoner to help write some real-mode assembly, but I burnt out on it after a few months and forgot most of what I knew.
Related
I'm trying to communicate between two Linux systems via UART.
I want to send large chunks of data. With the specified baud rate it should take around 5 seconds, but it takes nearly 10 times the expected time.
As I'm sending more than the buffer can handle at once, the data is sent in small parts and I'm draining the buffer in between. If I measure the time needed for the drain and the number of bytes written to the buffer, I calculate a baud rate nearly 10 times lower than the specified one.
I would expect a transmission somewhat slower than the optimum, but not by this much.
Did I miss something while setting the UART or while writing? Or is this normal?
The code used for setup:
int bus = open(interface.c_str(), O_RDWR | O_NOCTTY | O_NDELAY); // <- also tried blocking
if (bus < 0) {
    return;
}

struct termios options;
memset(&options, 0, sizeof options);
if (tcgetattr(bus, &options) != 0) {
    close(bus);
    bus = -1;
    return;
}

cfsetspeed(&options, B230400);
cfmakeraw(&options); // <- also tried this manually, did not make a difference
if (tcsetattr(bus, TCSANOW, &options) != 0) {
    close(bus);
    bus = -1;
    return;
}

tcflush(bus, TCIFLUSH);
The code used to send:
int32_t res = write(bus, data, dataLength);
while (res < dataLength) {
    tcdrain(bus); // <- taking way longer than expected
    int32_t r = write(bus, &data[res], dataLength - res);
    if (r == 0)
        break;
    if (r == -1) {
        break;
    }
    res += r;
}
The docs are contradictory. cfsetspeed is documented as requiring a speed_t type, while the note says you need to use one of the "B" constants like "B230400." Have you tried using an actual speed_t type?
In any case, the speed you're supplying is the baud rate, which in this case should get you approximately 23,000 bytes/second, assuming there is no throttling.
The speed is dependent on hardware and link limitations. Also the serial protocol allows pausing the transmission.
FWIW, according to the time and speed you listed, if everything works perfectly, you'll get about 1 MB in 50 seconds. What speed are you actually getting?
Another "also" is the options structure. It's been years since I've had to do any serial I/O, but IIRC, you need to actually set the options that you want and are supported by your hardware, like CTS/RTS, XON/XOFF, etc.
This might be helpful.
As I'm sending more than the buffer can handle at once, the data is sent in small parts and I'm draining the buffer in between.
You have only provided code snippets (rather than a minimal, complete, and verifiable example), so your data size is unknown.
But the Linux kernel buffer size is known. What do you think it is?
(FYI it's 4KB.)
If I measure the time needed for the drain and the number of bytes written to the buffer, I calculate a baud rate nearly 10 times lower than the specified one.
You're confusing throughput with baud rate.
The maximum throughput (of just payload) of an asynchronous serial link will always be less than the baudrate due to framing overhead per character, which could be two of the ten bits of the frame (assuming 8N1). Since your termios configuration is incomplete, the overhead could actually be three of the eleven bits of the frame (assuming 8N2).
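Concretely: at 230400 baud, 8N1 framing (10 bits on the line per 8-bit payload) caps payload throughput at 230400 / 10 = 23040 bytes per second, while 8N2 framing (11 bits per frame) caps it at about 230400 / 11 = 20945 bytes per second.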
In order to achieve the maximum throughput, the transmitting UART must saturate the line with frames and never let the line go idle.
The userspace program must be able to supply data fast enough, preferably by one large write() to reduce syscall overhead.
Did I miss something while setting the UART or while writing?
With Linux, you have limited access to the UART hardware.
From userspace your program accesses a serial terminal.
Your program accesses the serial terminal in a suboptimal manner.
Your termios configuration appears to be incomplete.
It leaves both hardware and software flow-control untouched.
The number of stop bits is untouched.
The Ignore modem control lines (CLOCAL) and Enable receiver (CREAD) flags are not enabled.
For raw reading, the VMIN and VTIME values are not assigned.
Or is this normal?
There are ways to easily speed up the transfer.
First, your program combines non-blocking mode with non-canonical mode. That's a degenerate combination for receiving, and suboptimal for transmitting.
You have provided no reason for using non-blocking mode, and your program is not written to properly utilize it.
Therefore your program should be revised to use blocking mode instead of non-blocking mode.
Second, the tcdrain() between write() syscalls can introduce idle time on the serial link. Use of blocking mode eliminates the need for this delay tactic between write() syscalls.
In fact with blocking mode only one write() syscall should be needed to transmit the entire dataLength. This would also minimize any idle time introduced on the serial link.
Note that the first write() does not check its return value for an error condition, which is always possible.
Bottom line: your program would be simpler and throughput would be improved by using blocking I/O.
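As a minimal sketch of that advice (same device settings as the question; the added flags are the ones listed as missing above):

#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

int open_serial_blocking(const char *path)
{
    int fd = open(path, O_RDWR | O_NOCTTY);    /* blocking: no O_NDELAY */
    if (fd < 0)
        return -1;

    struct termios options;
    if (tcgetattr(fd, &options) != 0) {
        close(fd);
        return -1;
    }
    cfmakeraw(&options);
    cfsetspeed(&options, B230400);
    options.c_cflag |= CLOCAL | CREAD;         /* ignore modem lines, enable receiver */
    options.c_cflag &= ~(CSTOPB | CRTSCTS);    /* one stop bit, no hardware flow control */
    options.c_cc[VMIN] = 1;                    /* blocking raw read: wait for at least 1 byte */
    options.c_cc[VTIME] = 0;
    if (tcsetattr(fd, TCSANOW, &options) != 0) {
        close(fd);
        return -1;
    }
    return fd;
}

With the descriptor in blocking mode, one write(fd, data, dataLength) call (with its return value checked) can replace the whole write/tcdrain loop.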
My group and I are trying to create a synthesizer out of a DE2-115 board for our undergraduate capstone project.
The only thing we can't figure out is how to get the frequencies mapped to the "keys" output properly through the audio port on the board. We've scoured the web and all the provided documentation, including the datasheets for the codec, but we can't figure out how to get it to work properly in VHDL.
Has anyone ever worked with outputting audio through the WM8731 and if so, would they be willing to help us out?
I did that some years ago; it wasn't too hard, but I used a NIOS processor with SOPC Builder.
I used the Altera University Program IP cores available here.
These cores provide different functionality related to the DE2 and possibly other Altera-sponsored boards.
According to my logs, I used 3 of these cores to make audio work.
The first is altera_up_avalon_audio_and_video_config, which is used to configure the audio CODEC chip at initialization.
The second IP provides the data in/out interface with the audio chip: altera_up_avalon_audio.
The last one is altera_up_avalon_clocks. I can't remember exactly what it does, but as the name implies it's necessary for clocking the audio chip. I think it takes an input clock and uses a PLL to generate the right clock for the CODEC.
As I said, I used a NIOS processor. Still according to my logs, the C code I used is:
void audio_isr(void* context, alt_u32 id)
{
    const int len = 2682358;             //length of the sample data in bytes
    static signed char *ptr = test_snd;  //test_snd: sample buffer defined elsewhere
    unsigned int x[128];
    alt_up_audio_dev *audio_dev = (alt_up_audio_dev *)context;
    unsigned int n = alt_up_audio_write_fifo_space(audio_dev, ALT_UP_AUDIO_RIGHT);
    for(unsigned int i = 0; i < n; i++) {
        x[i] = 0x800000 + (((int)*ptr++) << 9);  //shift parenthesised: '+' binds tighter than '<<'
        if (ptr > test_snd + len) {
            ptr = test_snd;
            printf("Done\n");
        }
    }
    alt_up_audio_write_fifo(audio_dev, x, n, ALT_UP_AUDIO_RIGHT);
    alt_up_audio_write_fifo(audio_dev, x, n, ALT_UP_AUDIO_LEFT);
}
static void audio_init(void)
{
    alt_up_audio_dev *audio_dev = alt_up_audio_open_dev(AUDIO_0_NAME);
    if (audio_dev == NULL)
        printf("Error: could not open audio device\n");
    else
        printf("Opened audio device\n");
    alt_up_audio_reset_audio_core(audio_dev);
    alt_up_audio_disable_write_interrupt(audio_dev);
    alt_up_audio_disable_read_interrupt(audio_dev);
    alt_irq_register(AUDIO_0_IRQ, (void *)audio_dev, audio_isr);
    alt_up_audio_enable_write_interrupt(audio_dev);
}
I don't remember how well that worked. Well enough to deserve a commit, but it was still a test, so don't give it too much importance. My final code was way too complicated to present here.
Hopefully, this is enough to get you started on the right track, which is to use Altera's IP. These IPs are clear-source AFAIR, so if you don't want the NIOS, it should be simpler to start from their source than from scratch.
You will probably need three modules: a clock generator, audio configuration, and an audio serializer/deserializer. You don't need to go for a NIOS II based design. Please check the Altera lab experiment to understand how it works.
experiment link - https://www.altera.com/support/training/university/materials-lab-exercises.html#Digital-Logic-Exercises
pdf link - ftp://ftp.altera.com/up/pub/Altera_Material/Laboratory_Exercises/Digital_Logic/DE2-115/vhdl/lab12_VHDL.pdf
Also check the demo files.
I am new to Linux and learning how Linux comes to know about the available physical memory. I came to know there is a BIOS system call, int 0x15, which will give you the E820 memory map.
Now I found a piece of code which says it is a definition for converting an EFI memory map to an E820 memory map. What does that mean?
Does it mean the underlying motherboard firmware is EFI-based, but since this code runs on x86 we need to convert it to an E820 memory map?
If so, does x86 only know about E820 memory maps?
What is the difference between E820 and EFI memory maps?
Looking forward to a detailed answer on this.
In both cases, what you have is your firmware (BIOS or EFI) which is responsible for detecting what memory (and how much) is actually physically plugged in, and the operating system which needs to know this information in some format.
Does it mean the underlying motherboard firmware is EFI-based, but since this code runs on x86 we need to convert it to an E820 memory map?
Your confusion here is the idea that EFI and x86 are incompatible - they aren't. EFI firmware has its own mechanisms for reporting available memory - specifically, you can use the GetMemoryMap boot service (before you invoke ExitBootServices) to retrieve the memory map from the firmware. However, critically, this memory map is in the format the EFI firmware wishes to report (EFI_MEMORY_DESCRIPTOR) rather than E820. In this scenario, you would not also attempt int 15h, since you already have the information you need.
I suspect the Linux kernel uses the E820 format as its internal representation of memory on the x86 architecture. When booting via EFI, the kernel must use the EFI firmware boot services, but it chooses to convert the answer it gets back into the E820 format.
This is not a necessary thing for a kernel you are writing to do. You simply need to know how memory is mapped.
It is also the case that some bootloaders will provide this information for you - GRUB, for example. Part of the Multiboot specification allows you to instruct the bootloader that it must provide this information to your kernel.
For more on this, the ever-useful osdev wiki has code samples etc. The relevant sections for getting memory maps from grub are here.
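For illustration, walking the map GRUB gives you looks roughly like this. The structure and constant names follow the multiboot.h header distributed with the specification - treat them as an assumption to check against your copy:

#include <stdint.h>
#include "multiboot.h"   /* header from the multiboot specification */

/* Sum up the available RAM reported by GRUB. Only valid if bit 6
   of mbi->flags is set (i.e. the memory map fields are present). */
uint64_t count_usable_ram(multiboot_info_t *mbi)
{
    uint64_t total = 0;
    multiboot_memory_map_t *mmap = (multiboot_memory_map_t *)(uintptr_t)mbi->mmap_addr;
    while ((uintptr_t)mmap < mbi->mmap_addr + mbi->mmap_length) {
        if (mmap->type == MULTIBOOT_MEMORY_AVAILABLE)   /* type 1: usable RAM */
            total += mmap->len;
        /* the 'size' field does not count itself, hence the extra bytes */
        mmap = (multiboot_memory_map_t *)((uintptr_t)mmap + mmap->size + sizeof(mmap->size));
    }
    return total;
}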
Further points:
The OS needs to understand what memory is mapped where for several reasons. One is to avoid using physical memory where firmware services reside; another is communication with devices that share memory with the CPU. The video buffer is a common example of this.
Secondly, listing the memory map in EFI is not too difficult. If you haven't already discovered it, the UEFI shell that comes with some firmware has a memmap command to display the memory map. If you want to implement this yourself, a quick and dirty way to do that looks like this:
EFI_STATUS EFIAPI PrintMemoryMap(EFI_SYSTEM_TABLE* SystemTable)
{
    EFI_STATUS status = EFI_SUCCESS;
    UINTN MemMapSize = sizeof(EFI_MEMORY_DESCRIPTOR)*16;
    UINTN MemMapSizeOut = MemMapSize;
    UINTN MemMapKey = 0;
    UINTN MemMapDescriptorSize = 0;
    UINT32 MemMapDescriptorVersion = 0;
    UINTN DescriptorCount = 0;
    UINTN i = 0;
    UINT8* buffer = NULL;
    EFI_MEMORY_DESCRIPTOR* MemoryDescriptorPtr = NULL;

    do
    {
        buffer = AllocatePool(MemMapSize);
        if ( buffer == NULL ) break;

        MemMapSizeOut = MemMapSize; // reset to the allocated size before each attempt
        status = gBS->GetMemoryMap(&MemMapSizeOut, (EFI_MEMORY_DESCRIPTOR*)buffer,
            &MemMapKey, &MemMapDescriptorSize, &MemMapDescriptorVersion);
        Print(L"MemoryMap: Status %x\n", status);
        if ( status != EFI_SUCCESS )
        {
            FreePool(buffer);
            MemMapSize += sizeof(EFI_MEMORY_DESCRIPTOR)*16;
        }
    } while ( status != EFI_SUCCESS );

    if ( buffer != NULL )
    {
        DescriptorCount = MemMapSizeOut / MemMapDescriptorSize;
        MemoryDescriptorPtr = (EFI_MEMORY_DESCRIPTOR*)buffer;
        Print(L"MemoryMap: DescriptorCount %d\n", DescriptorCount);
        for ( i = 0; i < DescriptorCount; i++ )
        {
            MemoryDescriptorPtr = (EFI_MEMORY_DESCRIPTOR*)(buffer + (i*MemMapDescriptorSize));
            Print(L"Type: %d PhysicalStart: %lx VirtualStart: %lx NumberofPages: %d Attribute %lx\n",
                MemoryDescriptorPtr->Type, MemoryDescriptorPtr->PhysicalStart,
                MemoryDescriptorPtr->VirtualStart, MemoryDescriptorPtr->NumberOfPages,
                MemoryDescriptorPtr->Attribute);
        }
        FreePool(buffer);
    }
    return status;
}
This is a reasonably straightforward function. GetMemoryMap complains bitterly if you don't pass in a large enough buffer, so we keep incrementing the buffer size until we have enough space. Then we loop and print. Be aware that sizeof(EFI_MEMORY_DESCRIPTOR) is in fact not the difference between structs in the output buffer - use the returned size calculation shown above, or you'll end up with a much larger table than you really have (and the address spaces will all look wrong).
It wouldn't be massively difficult to decide on a common format with E820 from this table.
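A sketch of that conversion: the E820 type codes are the standard ones (1 = usable, 2 = reserved, 3 = ACPI reclaimable, 4 = ACPI NVS), and which EFI types you fold into "usable" depends on whether you run this before or after ExitBootServices:

/* Standard E820 type codes. */
enum { E820_USABLE = 1, E820_RESERVED = 2, E820_ACPI_RECLAIM = 3, E820_ACPI_NVS = 4 };

static UINT32 EfiTypeToE820(UINT32 EfiType)
{
    switch (EfiType) {
    case EfiConventionalMemory:
    case EfiLoaderCode:
    case EfiLoaderData:
    case EfiBootServicesCode:   /* reclaimable only after ExitBootServices */
    case EfiBootServicesData:
        return E820_USABLE;
    case EfiACPIReclaimMemory:
        return E820_ACPI_RECLAIM;
    case EfiACPIMemoryNVS:
        return E820_ACPI_NVS;
    default:                    /* runtime services, MMIO, unusable, ... */
        return E820_RESERVED;
    }
}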
I am interested in writing emulators, for example for the Game Boy and other handheld consoles, but I read that the first step is to emulate the instruction set. I found a link here that said beginners should emulate the Commodore 64's 8-bit microprocessor. The thing is, I don't know a thing about emulating instruction sets. I know the MIPS instruction set, so I think I can manage understanding other instruction sets, but what is meant by emulating them?
NOTE: If someone can provide me with a step-by-step guide to instruction set emulation for beginners, I would really appreciate it.
NOTE #2: I am planning to write in C.
NOTE #3: This is my first attempt at learning the whole emulation thing.
Thanks
EDIT: I found this site, which is a detailed step-by-step guide to writing an emulator and seems promising. I'll start reading it, and I hope it helps other people who are looking into writing emulators too.
Emulator 101
An instruction set emulator is a software program that reads binary data from a software device and carries out the instructions that data contains as if it were a physical microprocessor accessing physical data.
The Commodore 64 used a 6502 microprocessor. I wrote an emulator for this processor once. The first thing you need to do is read the datasheets on the processor and learn about its behavior. What sort of opcodes does it have? What about memory addressing and its method of I/O? What are its registers? How does it start executing? These are all questions you need to be able to answer before you can write an emulator.
Here is a general overview of what it could look like in C (not 100% accurate):
#include <stdint.h>

uint8_t RAM[65536];  //Declare a memory buffer for emulated RAM (64k)
uint16_t A;          //Declare accumulator
uint16_t X;          //Declare X register
uint16_t Y;          //Declare Y register
uint16_t PC = 0;     //Declare program counter, start executing at address 0
uint16_t FLAGS = 0;  //Start with all flags cleared

//Return 1 if the carry flag is set, 0 otherwise. In this example the 3rd bit is
//the carry flag (not true for the actual 6502)
#define CARRY_FLAG(flags) ((0x4 & flags) >> 2)

#define ADC 0x69
#define LDA 0xA9

//'executing', UpdateFlags(), ADC_SIZE and LDA_SIZE are assumed to be defined elsewhere
while (executing) {
    switch(RAM[PC]) {  //Grab the opcode at the program counter
        case ADC:      //Add with carry: adds to the accumulator
            A = A + RAM[PC+1] + CARRY_FLAG(FLAGS);
            UpdateFlags(A);
            PC += ADC_SIZE;
            break;
        case LDA:      //Load accumulator
            A = RAM[PC+1];
            UpdateFlags(A);
            PC += LDA_SIZE;
            break;
        default:
            //Invalid opcode!
            break;
    }
}
According to this reference, ADC actually has 8 opcodes in the 6502 processor, which means you will have 8 different ADC cases in your switch statement, one for each memory addressing scheme. You will have to deal with endianness and byte order, and of course pointers. I would get a solid understanding of pointers and type casting in C if you don't already have one. To manipulate the flags register you have to have a solid understanding of bitwise operations in C. If you are clever, you can make use of C macros and even function pointers to save yourself some work, as in the CARRY_FLAG example above.
Every time you execute an instruction, you must advance the program counter by the size of that instruction, which is different for each opcode. Some opcodes don't take any arguments, so their size is just 1 byte, while others take 8-bit or 16-bit operands, as in the LDA example above. All this should be pretty well documented.
Branch instructions (JMP, BEQ, BNE, etc. on the 6502) are simple: if some flag is set in the flags register, then load the PC with the address specified. This is how "decisions" are made in a microprocessor, and emulating them is simply a matter of changing the PC, just as the real microprocessor would.
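For example, on the 6502, BEQ (opcode 0xF0) is a two-byte instruction with a signed 8-bit displacement, so its case might look like this (ZERO_FLAG is a hypothetical macro in the style of CARRY_FLAG above):

case BEQ:   //Branch if the zero flag is set
    if (ZERO_FLAG(FLAGS))
        PC += 2 + (int8_t)RAM[PC+1];   //displacement is relative to the next instruction
    else
        PC += 2;                       //not taken: just skip to the next instruction
    break;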
The hardest part about writing an instruction set emulator is debugging. How do you know everything is working like it should? There are plenty of resources to help you: people have written test programs that exercise every instruction. You can execute them one instruction at a time and compare against the reference output. If something is different, you know you have a bug somewhere and can fix it.
This should be enough to get you started. The important thing is that you have A) a good solid understanding of the instruction set you want to emulate and B) a solid understanding of low-level data manipulation in C, including type casting, pointers, bitwise operations, byte order, etc.
RS-232 communication sometimes uses 9-bit bytes. This can be used to communicate with multiple microcontrollers on a bus where 8 bits are data and the extra bit indicates an address byte (rather than data). Inactive controllers only generate an interrupt for address bytes.
Can a Linux program send and receive 9-bit bytes over a serial device? How?
The termios system does not directly support 9-bit operation, but it can be emulated on some systems by playing tricks with the CMSPAR flag. It is undocumented and may not appear in all implementations.
Here is a link to a detailed write-up on how 9-bit emulation is done:
http://www.lothosoft.ch/thomas/libmip/markspaceparity.php
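The short version of the trick, where CMSPAR is available (Linux-specific, and possibly absent on some systems, which is the caveat above): mark parity is PARENB | CMSPAR | PARODD, and space parity is PARENB | CMSPAR. A sketch:

#include <termios.h>

/* Force the parity bit to a constant 1 ("mark" parity). Clearing
   PARODD instead gives a constant 0 ("space" parity). */
static int set_mark_parity(int fd)
{
    struct termios tio;
    if (tcgetattr(fd, &tio) != 0)
        return -1;
    tio.c_cflag |= PARENB | CMSPAR | PARODD;
    return tcsetattr(fd, TCSADRAIN, &tio);
}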
9-bit data is a standard part of RS-485 and used in multidrop applications. Hardware based on 16C950 devices may support 9-bits, but only if the UART is used in its 950 mode (rather than the more common 450/550 modes used for RS-232).
A description of the 16C950 may be found here.
This page summarizes Linux RS-485 support, which is baked into more recent kernels (>=3.2 rc3).
9-bit data framing is possible even if a real-world UART doesn't support it natively.
I found one library that also does it under Windows and Linux.
See http://adontec.com/9-bit-serial-communication.htm
Basically what he wants is to output data from a Linux box and send it on, let's say, a 2-wire bus with a bunch of MAX232 ICs -> some microcontrollers with a UART or a software RS-232 implementation.
One can leave the individual MAX232 level converters away as long as there are no voltage potential issues between the individual microcontrollers (on the same PCB, for example, rather than in different buildings ;) and up until the maximum output (TTL) load of the MAX232 (or clones, or a resistor and inverter/transistor) IC.
I can't find Linux termios settings for MARK or SPACE parity (which I'm sure the hardware UARTs actually do support, just not the Linux tty implementation), so we shall just hack the actual parity generation a bit.
8 data bits plus 2 stop bits is the same length as 8 data bits, 1 parity bit and 1 stop bit (where the first stop bit is a logic 1, negative line voltage).
One would then use the 9th bit as an indicator that the other 8 bits are the address of an individual microcontroller or a group of them, which then take the next bytes as some sort of command, or as data, once they are 'addressed'.
This provides an 8-bit-transparent, although one-way, means to address 'a lot of things' (256 different (groups of) things, actually ;) on the same bus. It's one way; if you wanted two-way you'd need 2 wire pairs, or to modulate at multiple frequencies, or to implement collision detection and the whole lot of that.
PIC microcontrollers can do 9-bit serial communication with, ehm, 'some trickery' (the 9th bit is actually in another register ;).
Now... considering the fact that on Linux and the like it is not -that- simple...
Have you considered simply turning parity on for the 'address word' (the one in which you need 9 bits ;) and then setting it to either odd or even, calculated so that the 9th (parity) bit comes out as '1' with parity on and 8-bit 'data', and then turning parity back off and 2 stop bits on? (That still keeps a 9-bit word length as far as your microcontroller is concerned.) It was a long time ago, but as far as I recall stop bits are just as long as data bits in the timing of things.
This should work on anything that can do 8-bit output, with parity, and with 2 stop bits, which includes PC hardware and Linux (and DOS, etc.).
PC hardware also has options to just turn 'parity' on or off for all words (without actually calculating it), if I recall correctly from back in the days.
Furthermore, the 9th bit the PIC datasheet speaks about actually IS the parity bit as in the RS-232 specification; you're just free to turn it on or off (on PICs, anyway - on Linux it's a bit more complicated than that).
(Nothing a few termios settings on Linux won't solve, I think... just turn it on and off then... we've made that stuff do weirder things ;)
A PIC microcontroller actually does exactly the same; it's just not presented as 'what it actually is' in the datasheet. They actually call it 'the 9th bit' and things like that. On PCs, and therefore on Linux, it works pretty much the same way though.
Anyway, if this thing should work 'both ways', then good luck wiring it with 2 pairs or figuring out some way to do collision detection, which is a hell of a lot more problematic than getting 9 bits out.
Either way, it's not much more than an overrated shift register. If the UART on the PC doesn't want to do it (which I doubt), just abuse the DTR pin and shift out the data by hand, or abuse the printer port to do the same, or hook up a shift register to the printer port... but with the parity trick it should work fine anyway.
#include <termios.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdint.h>
#include <string.h>
#include <stdlib.h>
struct termios com1pr;
int com1fd;
void bit9oneven(int fd){
    cfmakeraw(&com1pr);
    com1pr.c_iflag = IGNPAR;
    com1pr.c_cflag = CS8|CREAD|CLOCAL|PARENB;
    cfsetispeed(&com1pr, B300);
    cfsetospeed(&com1pr, B300);
    tcsetattr(fd, TCSANOW, &com1pr);
} //bit9oneven

void bit9onodd(int fd){
    cfmakeraw(&com1pr);
    com1pr.c_iflag = IGNPAR;
    com1pr.c_cflag = CS8|CREAD|CLOCAL|PARENB|PARODD;
    cfsetispeed(&com1pr, B300);
    cfsetospeed(&com1pr, B300);
    tcsetattr(fd, TCSANOW, &com1pr);
} //bit9onodd

void bit9off(int fd){
    cfmakeraw(&com1pr);
    com1pr.c_iflag = IGNPAR;
    com1pr.c_cflag = CS8|CREAD|CLOCAL|CSTOPB;
    cfsetispeed(&com1pr, B300);
    cfsetospeed(&com1pr, B300);
    tcsetattr(fd, TCSANOW, &com1pr);
} //bit9off

void initrs232(){
    com1fd = open("/dev/ttyUSB0", O_RDWR|O_SYNC|O_NOCTTY);
    if(com1fd >= 0){
        tcflush(com1fd, TCIOFLUSH);
    }else{
        printf("FAILED TO INITIALIZE\n");
        exit(1);
    }
} //initrs232
void sendaddress(unsigned char x){
    unsigned char n;
    unsigned char t = 0;
    for(n = 0; n < 8; n++)
        if(x & (1 << n)) t++;  //count set bits; the original "x&2^n" XORed instead of shifting
    if(t & 1) bit9oneven(com1fd);
    else bit9onodd(com1fd);
    write(com1fd, &x, 1);
}
int main(){
    unsigned char datatosend = 0x00;  //bogus data byte to send
    initrs232();
    while(1){
        bit9oneven(com1fd);
        while(1) write(com1fd, &datatosend, 1);
        //sendaddress(223);               //address the microcontroller at address 223
        //write(com1fd, &datatosend, 1);  //send a data byte
        //sendaddress(128);               //address the microcontroller at address 128
        //write(com1fd, &datatosend, 1);  //send a data byte
    }
    //close(com1fd);
}
It somewhat works... maybe some things are the wrong way around, but it does send 9 bits. (CSTOPB sets 2 stop bits, meaning that on 8-bit transparent data the 9th bit = 1; in addressing mode the 9th bit = 0 ;)
Also take note that the actual RS-232 line voltage levels are the other way around from what your software 'reads' (which is the same as the 'inverted' 5V TTL levels your PIC microcontroller gets from the transistor or inverter or MAX232-clone IC): -19V or -10V (PC) for logic 1, +19V/+10V for logic 0. Stop bits are negative voltage, like a 1, and the same length.
Bits go out LSB first (and in this case bit 8 last), so: start bit -> 0, 1, 2, 3, 4, 5, 6, 7.
It's a bit hacky, but it seems to work on the scope.
Can a Linux program send and receive 9-bit bytes over a serial device?
The standard UART hardware (8251, etc.) doesn't support 9-bit data modes.
I also made a complete demo of 9-bit UART emulation (based on even/odd parity). You can find it here.
All the sources are available on git.
You can easily adapt it for your device. Hope you like it.