Writing data over RxTx using usbserial? - linux

I'm using the RxTx library over usbserial on a Linux distro. The RxTx lib seems to behave quite differently (in a bad way) over USB-serial than it does over a native serial port.
One of my biggest problems is that the RxTx SerialPortEvent.OUTPUT_BUFFER_EMPTY event does not fire on Linux over USB-serial.
How do I know when I should write to the stream? Any indicators I might have missed?
So far my experience with writing and reading concurrently has not been great. Does anyone know if I should lock the DATA_AVAILABLE handler from being invoked while I'm writing to the stream? Or does RxTx accept concurrent reads/writes?
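For concreteness, here's the shape of what I'm experimenting with: a single dedicated writer thread fed by a queue, so the DATA_AVAILABLE listener only ever reads and the two never touch the port concurrently. This is only a sketch against the gnu.io API; the port name, baud rate and framing are placeholders, not anything RxTx requires.

    import gnu.io.*;
    import java.io.*;
    import java.util.concurrent.*;

    public class SerialPump implements SerialPortEventListener {
        private final BlockingQueue<byte[]> writeQueue = new LinkedBlockingQueue<>();
        private InputStream in;
        private OutputStream out;

        public void start() throws Exception {
            CommPortIdentifier id = CommPortIdentifier.getPortIdentifier("/dev/ttyUSB0");
            SerialPort port = (SerialPort) id.open("SerialPump", 2000);
            port.setSerialPortParams(9600, SerialPort.DATABITS_8,
                    SerialPort.STOPBITS_1, SerialPort.PARITY_NONE);
            in = port.getInputStream();
            out = port.getOutputStream();
            port.addEventListener(this);
            port.notifyOnDataAvailable(true);    // reads are event-driven
            // notifyOnOutput is deliberately left off -- see the answers below

            Thread writer = new Thread(() -> {   // the one and only writer
                try {
                    while (true) {
                        out.write(writeQueue.take());
                        out.flush();
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
            writer.setDaemon(true);
            writer.start();
        }

        public void send(byte[] data) {
            writeQueue.add(data);
        }

        public void serialEvent(SerialPortEvent ev) {
            if (ev.getEventType() != SerialPortEvent.DATA_AVAILABLE) return;
            try {
                byte[] buf = new byte[256];
                int n;
                while (in.available() > 0 && (n = in.read(buf)) > 0) {
                    System.out.write(buf, 0, n);   // hand off to a real parser here
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

Is a pattern like this what people actually use, or does RxTx tolerate writes from arbitrary threads?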

(perhaps slightly off-topic, but here goes)
I'm not familiar with that particular library, but I can assure you from dire experience (I work in the security systems (as in: hardware security devices) business, where RS-232 is heavily used) that not all USB-serial converters are born equal. Many such devices do not properly emulate all RS-232 lines, and many can't even handle any comms without flow control. Before blaming the library, try to confirm that the hardware actually does what it's supposed to do.
Without wanting to endorse a particular product or brand, the best (as in: least buggy) USB-serial converter I have come across in years is the USA-19HS.

Using RxTx over USB-to-serial you can't set notifyOnOutput to true, otherwise it locks up completely.
I've learned this the hard way. This problem is documented on a few websites around the internet.
I'm running it on Linux and I believe that this is a Linux-only issue, although I can't confirm that.
As for the link you've given me... I've seen the SimpleReader and SimpleWriter examples, but these don't represent a real-world application. They aren't multi-threaded, they assume a single read returns the full data they need instead of buffering partial reads, etc.
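To illustrate the buffering point, real-world reading ends up looking something like this (just a sketch; the newline delimiter is an example, since the serial port has no idea where your messages end):

    import java.io.ByteArrayOutputStream;

    class ReadBuffer {
        private final ByteArrayOutputStream pending = new ByteArrayOutputStream();

        // Called with whatever DATA_AVAILABLE handed us -- possibly half a
        // message, possibly several messages at once.
        void onBytes(byte[] buf, int len) {
            pending.write(buf, 0, len);
            byte[] all = pending.toByteArray();
            int start = 0;
            for (int i = 0; i < all.length; i++) {
                if (all[i] == '\n') {                       // example frame delimiter
                    handleMessage(new String(all, start, i - start));
                    start = i + 1;
                }
            }
            pending.reset();
            pending.write(all, start, all.length - start);  // keep the incomplete tail
        }

        void handleMessage(String msg) {
            System.out.println("got: " + msg);
        }
    }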
Thanks,
Jeach!

Related

Linux Kernel API: meaning and stability of nodes under `/sys/devices/pci...` tree in sysfs

The essence of my question is almost painfully simple:
Given my current hardware, would the design of Linux sysfs allow me to expect that the device at /sys/devices/pci0000:00/0000:00:14.0/usb1/1-6 will always be the same device at that same location every time that I boot?
I'm 99% certain that the answer is "yes", but getting an expert to quickly weigh in seems prudent, to make sure I don't do anything stupid.
The inverse way to ask my question would be:
Is there any non-determinism in how the kernel enumerates pci and usb hardware at boot? (Assume the hardware is always fixed in advance of the boot.)
Supporting details:
Kernel in use on this hardware is 5.4.0-58-generic
This PCI and USB connection is all inside of the custom device enclosure, and is part of the "permanent" hardware design. There is no possibility of "hot-plugging" or unplugging or any end-user intervention with regard to changing/reconfiguring these connections.
The actual device that I care about, located at usb1/1-6, is an STM32F103 microcontroller, in case that matters. (I suspect it does not.)
The hardware design is fixed, and made by someone other than myself. I am writing software (mostly high-level GUI software) that runs on this device. As you can probably guess, my kernel knowledge is a bit weak.
As I said, I'm fairly convinced that the structure of the path /sys/devices/pci0000:00/0000:00:14.0/usb1/1-6 is derived entirely from the real-world structure of the hardware, but am looking for confirmation.
I've been reading about sysfs (such as here: https://www.kernel.org/doc/html/latest/admin-guide/sysfs-rules.html), but I have yet to stumble on any single unambiguous sentence that would resolve my uncertainty.

How to turn off a GPIO port on BeagleBone Black Wireless

My task is to create a program to open and close an electronic valve that is plugged into GPIO ports on my BeagleBone, by using TTL signals.
Questions:
Can I do this?
How do I make an executable file to do this?
Can someone refer me to documentation on this?
Am I going about this in the wrong way?
Thank you.
P.S. If you couldn't already tell I am very new to this.
Yes
There are many ways. It's actually a pretty standard Linux computer and you can use any of a "million" different programming languages to achieve this. This also means you don't have to look for "BeagleBone"-specific instructions (beyond the GPIO info below); your problem is just "How do I write a program that can write text to a file on Linux?". Bonus: This sounds easy and it is easy!
Yes, take a look here for the hardware-specific part:
https://github.com/adafruit/adafruit-beaglebone-io-python/issues/157
It describes fairly well both the new and the old sysfs interfaces you can use to manipulate GPIOs.
Depending on the language of your choice, there may already be bindings or a library to abstract this; see the bare-bones sketch after this answer for what it looks like with no library at all.
No (only based on the information you provided, are there other requirements?)
We all were new at this at some point, don't worry.
Sidenote: It's generally a good idea to make sure that you are running the latest firmware. In case of the BB-Family you can find them here: http://beagleboard.org/latest-images
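To make "write text to a file" concrete, here is a minimal sketch in Java using the legacy sysfs interface described in the link above. The GPIO number is an assumption (60 should be pin P9_12 on a BeagleBone Black, if I remember the mapping right); check your board's pinout, and expect to need root or suitable group permissions:

    import java.io.IOException;
    import java.nio.file.*;

    public class ValveControl {
        static final String GPIO = "60";   // example pin; verify against your board

        static void writeSysfs(String path, String value) throws IOException {
            Files.write(Paths.get(path), value.getBytes());
        }

        public static void main(String[] args) throws Exception {
            Path pin = Paths.get("/sys/class/gpio/gpio" + GPIO);
            if (!Files.exists(pin)) {
                writeSysfs("/sys/class/gpio/export", GPIO);   // make the pin visible
            }
            writeSysfs(pin + "/direction", "out");

            writeSysfs(pin + "/value", "1");   // TTL high: open the valve
            Thread.sleep(5000);                // hold for five seconds
            writeSysfs(pin + "/value", "0");   // TTL low: close the valve
        }
    }

Compile with javac, run with java (probably under sudo), and that's your executable. The same three file writes work from any language, or even straight from the shell.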

Why is DirectFB not more widely used in GNU/Linux? Are there crippling limitations to it that don't exist in X11?

As far as I understand, DirectFB offers hardware acceleration for many kinds of graphics cards. Additionally, it's smaller, faster, and uses up less memory than X11. Why then, is it not more mainstream than it is now?
Here's what I'm really unsure about: Do common GTK+/Qt programs need to be ported to it? On the DirectFB site, there's a project for porting Firefox to it. Why is that even necessary, if GTK+ has the ability to use DirectFB directly? The way I (probably incorrectly) understand it is that Firefox should output to GTK+, which should output to DirectFB, which should output to the hardware. Please correct me if I'm wrong about that.
If you're stressing about X as a source of overhead on a modern Linux system you probably aren't looking in the right place. X was designed a really long time ago for computers much less powerful than a modern cell phone.
If you look at "top" and see X using memory, there's a lot of work to do to figure out the actual X overhead. There are memory maps that aren't "real" memory, and there are resources (such as big blocks of pixels) allocated on behalf of apps. Bottom line the memory shown for X in top isn't what one might think.
People also hear that X uses the "network" and think this is going to be a performance bottleneck. "Network" here means local UNIX domain socket, which has negligible overhead on modern Linux. For the things that would bottleneck on the network, there are X extensions to make them fast (shared-memory pixmaps, DRI, etc.). Threads in-process wouldn't necessarily be faster than the X socket, because the bottlenecks have more to do with the inherent problem of coordinating multiple threads or processes accessing the same hardware than with the minimal overhead of local sockets.
The multi-process setup has a lot of advantages, such as being much harder to crash. See Google Chrome, for example, which uses multiple processes to be more robust - and, it turns out, also to run fast. Fewer processes do not necessarily mean more modern.
There are many reasons apps using GTK don't transparently port to DirectFB. For Firefox, one is that it uses X directly sometimes. Also, some toolkit-independent stuff, such as the browser plugin interface, uses X directly. The Flash plugin, for example, would not work on DirectFB. Even apps that don't use X directly often assume the normal X-based desktop environment exists (GNOME, etc.).
Another issue with replacing X is driver support: both of the major graphics card vendors (NVIDIA, ATI) have proprietary drivers that are a good bit more capable than the free drivers, and those proprietary drivers are tied to X.
And of course there's the migration path. If you have hundreds of apps using X and no clear end-user downside to X, nobody is going to switch to something where no apps work. Most likely, the solution here would be a rootless X server running on a new window system, so old apps still work.
Old is not always bad. X was very well-designed by smart people, and that has allowed it to evolve and change and still work many years later.
Anyway, that's all a long way of saying: switching away from X is tons of effort, it really works fine, and "works fine" has never applied to any of the alternatives (at least if you want to be able to run most apps on most hardware).
There are issues with X - such as the impossibility of doing an atomic screen update, something the Wayland project is looking at - but most of the issues are really cosmetic for users (e.g. non-atomic updates) or cosmetic for developers (old deprecated extensions and the like). It just isn't true that one could drop X and magically have something much smaller and faster. That's mostly based on people speculating that "old" and "uses network" must be slow and bloated, but again, X was designed for really really crappy hardware. I used to run X (and Emacs!) fine on my 386 with maybe 8 megs of RAM or something like that.
X11 is much more than just a way to draw to a screen - it's an entire network-capable desktop protocol suite. DirectFB does not intend to replace X11 (as far as I know) but rather runs parallel to it. That is, DirectFB strives to be a lightweight "host" for applications needing access to basic input and graphics output. It is possible for an X server (the server in X is the thing that displays things :-) to be written to use DirectFB.
GTK on DirectFB is different from GTK on X11.
Simple, because DirectFB doesn't solve any problem. For embedded systems it's fine, but for desktop, you lose a lot and don't gain really anything.
DirectFB was designed for embedded systems, which have small memory footprint. It allows applications to talk directly to video hardware through a direct API, speeding up and simplifying graphic operations.
It is often used by games and embedded systems developers to circumvent the overhead of a full X Window System server implementation.
http://elinux.org/DirectFB
X11 is far more portable than DirectFB. An X11 app can run on Linux, BSD, Solaris, AIX, HP-UX, MacOS X, Windows (via Cygwin or Exceed), and many more platforms. DirectFB is pretty much Linux-only.
With XDirectFB there is a rootless X Server using DirectFB.

How to convince my co-worker the linux kernel code is re-entrant?

Yeah I know... some people are hard to convince of what sounds natural to the rest of us, and I need your help right now, SO community (or I'll go postal soon..).
One of my co-workers is convinced the Linux kernel code is not re-entrant, because he read that somewhere the last time he got interested in it, likely 7 years ago. His reading was probably right at the time; remember that multi-core architectures were not widespread back then, and the Linux project in its early days was not as well written and fully fledged with fancy features as it is now.
Today is different. It's obvious that calling the same system call from different processes running in parallel on the same architecture won't lead to undefined behavior. The Linux kernel is widespread now, and known for its reliability, even when running on multi-core architectures.
That is my argument for now. But what would be yours, to prove it objectively?
I was thinking of showing him some function in the Linux kernel (on the LXR website), such as mutex_lock(). Everything there is tuned to work in a concurrent environment. But the code may not be that obvious to a newbie (as I am).
Please help me.. ;-)
Search the kernel mailing list archive for "BKL". That stands for "Big Kernel Lock", which is what used to be used to prevent problems. A lot of work has been put into breaking it up into pieces, to allow reentry as long as different parts of the kernel are used by different processes. Most recent mentions of "BKL" (at least that I've noticed) have basically referred to somebody trying to make his own life easy by locking more than somebody else approved of, at which point they frequently say something about "returning to the days of the BKL", or something on that order.
The easiest way to prove that multiple CPUs can execute in the kernel simultaneously would be to write a program that does a lot of work in-kernel (for example, looks up long pathnames in a tight loop), then run two copies of it at the same time on a dual-core machine and show that the "system" percentage in top goes above 50%.
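Something like this is enough (a sketch, not a rigorous benchmark -- the path and iteration style are arbitrary; the point is that every exists() call is a stat() system call, so nearly all the time is spent in the kernel):

    import java.io.File;

    public class KernelBusy {
        public static void main(String[] args) {
            // A path with many components makes the kernel walk more of the
            // directory tree on every lookup; each exists() is one stat() call.
            File f = new File("/usr/share/doc/../doc/../doc/../doc/does-not-exist");
            while (true) {
                f.exists();
            }
        }
    }

Run two copies on a dual-core box and watch the "sy" figure in top climb past 50%.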
At the risk of being snarky: why not just read the code? If neither of you are expert enough to follow the code through an interrupt handler and into some subsystem or another where you can read out the synchronization code, then ... why bother? Isn't this just a dancing on the head of a pin argument? It's like a creationist demanding "proof" of evolution when they aren't interested in learning any biology.
Maybe you should have your friend prove Linux is not reentrant. Burden should not be on you to prove this.

How do emulators work and how are they written? [closed]

How do emulators work? When I see NES/SNES or C64 emulators, it astounds me.
Do you have to emulate the processor of those machines by interpreting its particular assembly instructions? What else goes into it? How are they typically designed?
Can you give any advice for someone interested in writing an emulator (particularly a game system)?
Emulation is a multi-faceted area. Here are the basic ideas and functional components. I'm going to break it into pieces and then fill in the details via edits. Many of the things I'm going to describe will require knowledge of the inner workings of processors -- assembly knowledge is necessary. If I'm a bit too vague on certain things, please ask questions so I can continue to improve this answer.
Basic idea:
Emulation works by handling the behavior of the processor and the individual components. You build each individual piece of the system and then connect the pieces much like wires do in hardware.
Processor emulation:
There are three ways of handling processor emulation:
Interpretation
Dynamic recompilation
Static recompilation
With all of these paths, you have the same overall goal: execute a piece of code to modify processor state and interact with 'hardware'. Processor state is a conglomeration of the processor registers, interrupt handlers, etc for a given processor target. For the 6502, you'd have a number of 8-bit integers representing registers: A, X, Y, P, and S; you'd also have a 16-bit PC register.
With interpretation, you start at the IP (instruction pointer -- also called PC, program counter) and read the instruction from memory. Your code parses this instruction and uses this information to alter processor state as specified by your processor. The core problem with interpretation is that it's very slow; each time you handle a given instruction, you have to decode it and perform the requisite operation.
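To make that concrete, here is a minimal sketch of an interpreter for a tiny subset of the 6502 (a real emulator handles all the opcodes, addressing modes and flags, but the fetch-decode-execute shape is exactly this):

    public class Cpu6502 {
        int a, x, y, s, p;              // 8-bit registers
        int pc;                         // 16-bit program counter
        final int[] memory = new int[65536];
        boolean halted;

        void step() {
            int opcode = memory[pc++ & 0xFFFF];   // fetch
            switch (opcode) {                     // decode + execute
                case 0xA9:                        // LDA #imm: load A with immediate
                    a = memory[pc++ & 0xFFFF];
                    setZeroAndNegativeFlags(a);
                    break;
                case 0xAA:                        // TAX: transfer A to X
                    x = a;
                    setZeroAndNegativeFlags(x);
                    break;
                case 0xEA:                        // NOP
                    break;
                case 0x00:                        // BRK: treated as halt in this sketch
                    halted = true;
                    break;
                default:
                    throw new IllegalStateException("opcode not implemented: " + opcode);
            }
        }

        void setZeroAndNegativeFlags(int value) {
            // Z is bit 1 of P, N is bit 7, mirroring the 6502 status register
            p = (p & ~0x82) | (value == 0 ? 0x02 : 0) | (value & 0x80);
        }
    }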
With dynamic recompilation, you iterate over the code much like interpretation, but instead of just executing opcodes, you build up a list of operations. Once you reach a branch instruction, you compile this list of operations to machine code for your host platform, then you cache this compiled code and execute it. Then when you hit a given instruction group again, you only have to execute the code from the cache. (BTW, most people don't actually make a list of instructions but compile them to machine code on the fly -- this makes it more difficult to optimize, but that's out of the scope of this answer, unless enough people are interested)
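Building on the hypothetical Cpu6502 sketch above, the "list of operations" variant might look like the following. Real dynarecs emit host machine code rather than closures, but the block-per-branch structure and the cache keyed by starting PC are the same idea:

    import java.util.*;

    class BlockCache {
        private final Cpu6502 cpu;
        private final Map<Integer, List<Runnable>> cache = new HashMap<>();

        BlockCache(Cpu6502 cpu) { this.cpu = cpu; }

        void run() {
            // Compile the block at the current PC once; replay it on later visits.
            for (Runnable op : cache.computeIfAbsent(cpu.pc, this::decodeBlock)) {
                op.run();
            }
        }

        // Decode from startPc until something we can't handle (a branch, say),
        // producing one closure per instruction.
        private List<Runnable> decodeBlock(int startPc) {
            List<Runnable> ops = new ArrayList<>();
            int pc = startPc;
            boolean done = false;
            while (!done) {
                int opcode = cpu.memory[pc++];
                switch (opcode) {
                    case 0xA9: {                  // LDA #imm
                        final int imm = cpu.memory[pc++];
                        ops.add(() -> { cpu.a = imm; cpu.setZeroAndNegativeFlags(imm); });
                        break;
                    }
                    case 0xEA:                    // NOP
                        ops.add(() -> { });
                        break;
                    default:                      // block ends; re-read this opcode later
                        pc--;
                        done = true;
                }
            }
            final int endPc = pc;
            ops.add(() -> cpu.pc = endPc);        // hand control back, e.g. to an interpreter
            return ops;
        }
    }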
With static recompilation, you do the same as in dynamic recompilation, but you follow branches. You end up building a chunk of code that represents all of the code in the program, which can then be executed with no further interference. This would be a great mechanism if it weren't for the following problems:
Code that isn't in the program to begin with (e.g. compressed, encrypted, generated/modified at runtime, etc) won't be recompiled, so it won't run
It's been proven that finding all the code in a given binary is equivalent to the Halting problem
These combine to make static recompilation completely infeasible in 99% of cases. For more information, Michael Steil has done some great research into static recompilation -- the best I've seen.
The other side to processor emulation is the way in which you interact with hardware. This really has two sides:
Processor timing
Interrupt handling
Processor timing:
Certain platforms -- especially older consoles like the NES, SNES, etc -- require your emulator to have strict timing to be completely compatible. With the NES, you have the PPU (pixel processing unit) which requires that the CPU put pixels into its memory at precise moments. If you use interpretation, you can easily count cycles and emulate proper timing; with dynamic/static recompilation, things are a /lot/ more complex.
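With interpretation, the cycle counting is just bookkeeping around the step loop. A sketch on top of the earlier Cpu6502 (the per-opcode cost is simplified to a flat 2, which happens to be right for LDA #imm and TAX; ~114 CPU cycles per scanline matches the NES, but treat the numbers as illustrative):

    class TimedCpu extends Cpu6502 {
        static final int CYCLES_PER_SCANLINE = 114;  // ~113.67 on the NES

        int cyclesFor(int opcode) {
            return 2;  // real tables have a per-opcode cost, plus penalties
        }

        void runScanline(Ppu ppu) {
            int budget = CYCLES_PER_SCANLINE;
            while (budget > 0 && !halted) {
                int opcode = memory[pc & 0xFFFF];    // peek for the cost lookup
                step();
                budget -= cyclesFor(opcode);
            }
            ppu.renderScanline();                    // hypothetical PPU hook
        }
    }

    interface Ppu {
        void renderScanline();
    }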
Interrupt handling:
Interrupts are the primary mechanism by which the CPU communicates with hardware. Generally, your hardware components will tell the CPU what interrupts it cares about. This is pretty straightforward -- when your code raises a given interrupt, you look at the interrupt handler table and call the proper callback.
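That table-plus-callback structure is about as simple as it sounds (the IRQ numbering here is made up):

    class InterruptController {
        private final Runnable[] handlers = new Runnable[256];

        void register(int irq, Runnable handler) {
            handlers[irq] = handler;
        }

        // Emulated hardware (timer, video chip, ...) calls this to raise an IRQ.
        void raise(int irq) {
            Runnable handler = handlers[irq];
            if (handler != null) {
                handler.run();   // e.g. push PC and flags, jump to the vector
            }
        }
    }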
Hardware emulation:
There are two sides to emulating a given hardware device:
Emulating the functionality of the device
Emulating the actual device interfaces
Take the case of a hard-drive. The functionality is emulated by creating the backing storage, read/write/format routines, etc. This part is generally very straightforward.
The actual interface of the device is a bit more complex. This is generally some combination of memory mapped registers (e.g. parts of memory that the device watches for changes to do signaling) and interrupts. For a hard-drive, you may have a memory mapped area where you place read commands, writes, etc, then read this data back.
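In code, the interface side usually reduces to an address decoder on the bus's read/write path. The address ranges below are invented for illustration:

    class Bus {
        private final int[] ram = new int[0x2000];
        private final Disk disk = new Disk();

        int read(int addr) {
            if (addr < 0x2000) return ram[addr];
            if (addr >= 0x8000 && addr <= 0x8003) return disk.readRegister(addr);
            return 0xFF;                               // open bus
        }

        void write(int addr, int value) {
            if (addr < 0x2000) { ram[addr] = value; return; }
            if (addr >= 0x8000 && addr <= 0x8003) disk.writeRegister(addr, value);
            // a write to the command register would kick off work and
            // eventually raise an interrupt when it completes
        }
    }

    class Disk {
        int readRegister(int addr) { return 0; }       // status/data registers
        void writeRegister(int addr, int value) { }    // command/sector registers
    }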
I'd go into more detail, but there are a million ways you can go with it. If you have any specific questions here, feel free to ask and I'll add the info.
Resources:
I think I've given a pretty good intro here, but there are a ton of additional areas. I'm more than happy to help with any questions; I've been very vague in most of this simply due to the immense complexity.
Obligatory Wikipedia links:
Emulator
Dynamic recompilation
General emulation resources:
Zophar -- This is where I got my start with emulation, first downloading emulators and eventually plundering their immense archives of documentation. This is the absolute best resource you can possibly have.
NGEmu -- Not many direct resources, but their forums are unbeatable.
RomHacking.net -- The documents section contains resources regarding machine architecture for popular consoles
Emulator projects to reference:
IronBabel -- This is an emulation platform for .NET, written in Nemerle, which recompiles code to C# on the fly. Disclaimer: This is my project, so pardon the shameless plug.
BSnes -- An awesome SNES emulator with the goal of cycle-perfect accuracy.
MAME -- The arcade emulator. Great reference.
6502asm.com -- This is a JavaScript 6502 emulator with a cool little forum.
dynarec'd 6502asm -- This is a little hack I did over a day or two. I took the existing emulator from 6502asm.com and changed it to dynamically recompile the code to JavaScript for massive speed increases.
Processor recompilation references:
The research into static recompilation done by Michael Steil (referenced above) culminated in this paper and you can find source and such here.
Addendum:
It's been well over a year since this answer was submitted and with all the attention it's been getting, I figured it's time to update some things.
Perhaps the most exciting thing in emulation right now is libcpu, started by the aforementioned Michael Steil. It's a library intended to support a large number of CPU cores, which use LLVM for recompilation (static and dynamic!). It's got huge potential, and I think it'll do great things for emulation.
emu-docs has also been brought to my attention, which houses a great repository of system documentation, which is very useful for emulation purposes. I haven't spent much time there, but it looks like they have a lot of great resources.
I'm glad this post has been helpful, and I'm hoping I can get off my arse and finish up my book on the subject by the end of the year/early next year.
A guy named Victor Moya del Barrio wrote his thesis on this topic. A lot of good information on 152 pages. You can download the PDF here.
If you don't want to register with scribd, you can google for the PDF title, "Study of the techniques for emulation programming". There are a couple of different sources for the PDF.
Emulation may seem daunting but is actually quite a bit easier than simulating.
Any processor typically has a well-written specification that describes states, interactions, etc.
If you did not care about performance at all, then you could easily emulate most older processors using very elegant object oriented programs. For example, an X86 processor would need something to maintain the state of registers (easy), something to maintain the state of memory (easy), and something that would take each incoming command and apply it to the current state of the machine. If you really wanted accuracy, you would also emulate memory translations, caching, etc., but that is doable.
In fact, many microchip and CPU manufacturers test programs against an emulator of the chip and then against the chip itself, which helps them find out if there are issues in the specifications of the chip, or in the actual implementation of the chip in hardware. For example, it is possible to write a chip specification that would result in deadlocks, and when a deadlock occurs in the hardware, it's important to see whether it can be reproduced in the specification, since that indicates a greater problem than something in the chip implementation.
Of course, emulators for video games usually care about performance so they don't use naive implementations, and they also include code that interfaces with the host system's OS, for example to use drawing and sound.
Considering the very slow performance of old video games (NES/SNES, etc.), emulation is quite easy on modern systems. In fact, it's even more amazing that you could just download a set of every SNES game ever or any Atari 2600 game ever, considering that when these systems were popular having free access to every cartridge would have been a dream come true.
I know that this question is a bit old, but I would like to add something to the discussion. Most of the answers here center around emulators interpreting the machine instructions of the systems they emulate.
However, there is a very well-known exception to this called "UltraHLE" (Wikipedia article). UltraHLE, one of the most famous emulators ever created, emulated commercial Nintendo 64 games (with decent performance on home computers) at a time when it was widely considered impossible to do so. As a matter of fact, Nintendo was still producing new titles for the Nintendo 64 when UltraHLE was created!
For the first time, I saw articles about emulators in print magazines where before, I had only seen them discussed on the web.
The concept of UltraHLE was to make the impossible possible by emulating C library calls instead of machine-level calls.
Something worth taking a look at is Imran Nazar's attempt at writing a Gameboy emulator in JavaScript.
Having created my own emulator of the BBC Microcomputer of the 80s (type VBeeb into Google), there are a number of things to know.
You're not emulating the real thing as such; that would be a replica. Instead, you're emulating State. A good example is a calculator: the real thing has buttons, a screen, a case, etc. But to emulate a calculator you only need to emulate whether buttons are up or down, which segments of the LCD are on, etc. Basically, a set of numbers representing all the possible combinations of things that can change in a calculator.
You only need the interface of the emulator to appear and behave like the real thing. The more convincing this is the closer the emulation is. What goes on behind the scenes can be anything you like. But, for ease of writing an emulator, there is a mental mapping that happens between the real system, i.e. chips, displays, keyboards, circuit boards, and the abstract computer code.
To emulate a computer system, it's easiest to break it up into smaller chunks and emulate those chunks individually. Then string the whole lot together for the finished product. Much like a set of black boxes with inputs and outputs, which lends itself beautifully to object oriented programming. You can further subdivide these chunks to make life easier.
Practically speaking, you're generally writing for speed and fidelity of emulation. This is because software on the emulated system will (may) run more slowly than it did on the original hardware. That may constrain the choice of programming language, compilers, target system, etc.
Further to that, you have to circumscribe what you're prepared to emulate; for example, it's not necessary to emulate the voltage state of transistors in a microprocessor, but it's probably necessary to emulate the state of the register set of the microprocessor.
Generally speaking, the finer the level of detail of the emulation, the more fidelity you'll get to the original system.
Finally, information for older systems may be incomplete or non-existent. So getting hold of original equipment is essential, or at least prising apart another good emulator that someone else has written!
Yes, you have to interpret the whole binary machine code mess "by hand". Not only that, most of the time you also have to simulate some exotic hardware that doesn't have an equivalent on the target machine.
The simple approach is to interpret the instructions one-by-one. That works well, but it's slow. A faster approach is recompilation - translating the source machine code to target machine code. This is more complicated, as most instructions will not map one-to-one. Instead you will have to make elaborate work-arounds that involve additional code. But in the end it's much faster. Most modern emulators do this.
When you develop an emulator you are interpreting the processor assembly that the system works on (Z80, 8080, PS CPU, etc.).
You also need to emulate all the peripherals that the system has (video output, controllers).
You should start writing emulators for simple systems like the good old Game Boy (which uses a Z80-style processor, if I'm not mistaken) or the C64.
Emulators are very hard to create since there are many hacks (as in unusual effects), timing issues, etc. that you need to simulate.
For an example of this, see http://queue.acm.org/detail.cfm?id=1755886.
That will also show you why you ‘need’ a multi-GHz CPU for emulating a 1MHz one.
Also check out Darek Mihocka's Emulators.com for great advice on instruction-level optimization for JITs, and many other goodies on building efficient emulators.
I've never done anything so fancy as to emulate a game console, but I did take a course once where the assignment was to write an emulator for the machine described in Andrew Tanenbaum's Structured Computer Organization. That was fun and gave me a lot of aha moments. You might want to pick that book up before diving into writing a real emulator.
Advice on emulating a real system or your own thing?
I can say that emulators work by emulating the ENTIRE hardware. Maybe not down to the circuit (moving bits around like the hardware would; moving the byte is the end result, so copying the byte is fine). If one (input) piece is wrong, the entire system can go down or, at best, have a bug/glitch.
The Shared Source Device Emulator contains buildable source code to a PocketPC/Smartphone emulator (Requires Visual Studio, runs on Windows). I worked on V1 and V2 of the binary release.
It tackles many emulation issues:
- efficient address translation from guest virtual to guest physical to host virtual
- JIT compilation of guest code
- simulation of peripheral devices such as network adapters, touchscreen and audio
- UI integration, for host keyboard and mouse
- save/restore of state, for simulation of resume from low-power mode
To add to the answer provided by @Cody Brocious:
In the context of virtualization, where you are emulating a new system (CPU, I/O, etc.) for a virtual machine, we can see the following categories of emulators.
Interpretation: Bochs is an example of an interpreter. It is an x86 PC emulator; it takes each instruction from the guest system and translates it into another set of instructions (of the host ISA) to produce the intended effect. Yes, it is very slow; it doesn't cache anything, so every instruction goes through the same cycle.
Dynamic emulator: QEMU is a dynamic emulator. It does on-the-fly translation of guest instructions and also caches the results. The best part is that it executes as many instructions as possible directly on the host system, so that emulation is faster. Also, as mentioned by Cody, it divides the code into blocks (a single flow of execution).
Static emulator: As far as I know, there are no static emulators that are helpful in virtualization.
How I would start emulation:
1. Get books on low-level programming; you'll need it for the "pretend" operating system of the Nintendo... Game Boy...
2. Get books on emulation specifically, and maybe OS development (you won't be making an OS, but it's the closest thing to it).
3. Look at some open-source emulators, especially ones for the system you want to make an emulator for.
4. Copy snippets of the more complex code into your IDE/compiler. This will save you writing out long code. This is what I do for OS development; I use a distro of Linux.
I wrote an article about emulating the Chip-8 system in JavaScript.
It's a great place to start as the system isn't very complicated, but you still learn how opcodes, the stack, registers, etc work.
I will be writing a longer guide soon for the NES.
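As a taste of how approachable Chip-8 is: each instruction is two bytes, and the top nibble selects the operation. The three opcodes sketched below (1NNN, 6XNN, ANNN) are straight from the Chip-8 spec, though a full interpreter covers all 35:

    class Chip8 {
        final int[] memory = new int[4096];
        final int[] v = new int[16];    // V0..VF registers
        int i;                          // index register
        int pc = 0x200;                 // programs load at 0x200

        void step() {
            int opcode = (memory[pc] << 8) | memory[pc + 1];
            pc += 2;
            switch (opcode & 0xF000) {
                case 0x1000:            // 1NNN: jump to address NNN
                    pc = opcode & 0x0FFF;
                    break;
                case 0x6000:            // 6XNN: set register VX to NN
                    v[(opcode & 0x0F00) >> 8] = opcode & 0x00FF;
                    break;
                case 0xA000:            // ANNN: set index register I to NNN
                    i = opcode & 0x0FFF;
                    break;
                default:
                    throw new IllegalStateException(Integer.toHexString(opcode));
            }
        }
    }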
