Map stdin from second keyboard to a specific program / tty - linux

I have a program (a bash script actually - console only) that scans or makes copies, etc. based on user input. It asks questions such as how many copies you would like to make, then scans the document and prints it to another printer. The program runs in a loop so it's always there when a user passes by, and with a keyboard or number pad you can easily operate it. It basically turns a simple scanner/printer combo into a complex multifunction device.
I can leave it running on a dedicated system just fine, but to save electricity and resources, I would love to have it run on a computer someone else is already using. There is a user who has a laptop on the same desk as the scanner, and I want her to be able to do her thing in Xorg, as per usual, but have this little program running on an external monitor. That part is easy, but separating input is not: first of all the window has to be in focus, and then any input from the laptop keyboard OR the USB keyboard is sent to the program, obviously.
I can think of one way to do this: using VirtualBox, I can run a virtual machine without X, have it permanently ssh into the host OS (to which the USB scanner is connected), and have VirtualBox grab the USB keyboard input. But that seems excessive.
Does anyone know of a way to map input from a specific keyboard to a specific program or tty?
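One possible direction, sketched but not tested: open the USB keyboard's event node directly and grab it exclusively with the EVIOCGRAB ioctl, so neither X nor the console ever sees those keystrokes, then feed the decoded keys to the script. The by-id path below is a placeholder for whatever the USB keyboard actually shows up as:

    /* grabkbd.c - read one keyboard exclusively via evdev (sketch) */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/input.h>

    int main(int argc, char **argv)
    {
        /* Placeholder path: pass the real by-id node of the USB keypad. */
        const char *dev = argc > 1 ? argv[1]
                                   : "/dev/input/by-id/usb-EXAMPLE-event-kbd";
        int fd = open(dev, O_RDONLY);
        if (fd < 0) { perror(dev); return 1; }

        /* Exclusive grab: X and the consoles stop seeing this keyboard. */
        if (ioctl(fd, EVIOCGRAB, 1) < 0) { perror("EVIOCGRAB"); return 1; }

        struct input_event ev;
        while (read(fd, &ev, sizeof ev) == sizeof ev) {
            if (ev.type == EV_KEY && ev.value == 1)   /* key press only */
                printf("key %u\n", ev.code);          /* hand this to the script */
        }
        return 0;
    }

The EV_KEY codes (KEY_1, KEY_KPENTER, ...) would still have to be translated into the answers the bash script expects, for example by having the script read this program's output instead of its own stdin.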

Related

What is the tty subsystem for?

By now I have spent at least 10 hours trying to get my head around the famous blog post by Linus Akesson, and I'm still struggling. So let me ask my doubts about tty/ptty as a series of short questions.
1) Is the tty/ptty in user space or kernel space?
2) What is the tty/ptty's connection to devices, drivers, or some numbering scheme?
3) The tty seems to be linked to something called the controlling terminal of a process. What is the relation, and is every process related to a terminal?
4) On the whole I still don't understand where the heck this terminal concept fits in. If a process wants to read something from stdin, can't it simply do it from the required device file? What exactly is the problem that the tty intends to take care of?
5) I read somewhere that there are attempts to move the tty from user space to kernel space. Is the tty simply a historical residue rather than a strong design feature?
A clarification (which might answer some of your questions):
I think you meant pty (and not ptty) which is pseudo-tty/pseudo-terminal.
A tty (/dev/ttyN) - the name stands for teletype - refers to the original terminals (they used a line printer for output and a keyboard for input!). A terminal is basically just a user interface device that uses text for input and output.
A pty (/dev/pts/N) is a pseudo-terminal - it's a software implementation that appears to the attached program like a terminal, but instead of communicating directly with a "real" terminal, it transfers the input and output to another program. It's the endpoint of telnet/SSH or even the GNOME terminal.
For example, when you ssh into a remote machine and run ls, the ls output is sent to a pseudo-terminal, the other side of which is attached to the SSH daemon.
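You can see this from the program's side with a tiny sketch (nothing ssh-specific here; it just reports what stdout is connected to):

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* In an interactive ssh session or a GNOME terminal this prints
           something like /dev/pts/3; with stdout piped or redirected it
           is not a terminal at all. */
        if (isatty(STDOUT_FILENO))
            printf("stdout is %s\n", ttyname(STDOUT_FILENO));
        else
            fprintf(stderr, "stdout is not a tty\n");
        return 0;
    }

Run it in an ssh session and it reports a /dev/pts/N device; pipe its output through cat and stdout is no longer a terminal at all.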
EDIT:
As far as I know, the tty (and so the pty) is exposed to user mode, BUT it represents a terminal driver. What I mean is: the device file /dev/tty1 is the first virtual console. Most of the code lives in drivers/char, in the files tty_io.c, n_tty.c and vt.c (kernel source). In contrast to ordinary character devices, opening those files calls the tty_open routine, and trust me, it's way messier than opening a character device...
Tty/pty stands for the terminal drivers mentioned above, but they also stand for serial ports (the "numbering" you mentioned). I know very little about that part, so I don't want to give you incorrect data... but you can search the net about it (or someone else can continue from here).
EDIT2:
You have changed the question, so now it seems like I spoke out of context...
Anyway, the tty has many different roles even nowadays. The terminal driver is one of the ways user space and the kernel can "communicate"; there are several such mechanisms, e.g. terminal drivers, character devices, etc.
If you still have a question, please comment and don't change the whole post...

What is the relation between a terminal emulator and a TTY device?

I found this awesome text explaining a lot about TTY devices. It focuses on the relation between a TTY device and a shell (and its spawned jobs), but it says little about the relation between the terminal emulator and the TTY device, and now I'm wondering about that. I googled, but I could not find the answers...
1) What kind of input logic is the terminal emulator responsible for? Does it just send each character code (received via a window event) to the TTY device, or does it do more complicated processing before/during the transmission to the TTY? And how are these character codes sent to the TTY device? Via a file?
2) After a foreground process calls write() on the TTY device file, a.k.a. stdout/stderr, what happens? How does this data reach the terminal emulator process, so it can be rendered? Again, via a file?
3) Is the terminal emulator responsible for "allocating" a TTY device? Can TTY devices be created "on the fly" by the kernel, or is there a limited number of available TTY devices the kernel can manage?
First of all, answer yourself what a terminal is.
Historically, terminal devices were dumb devices that transformed output characters from programs into visible drawings on some output device (a printer or a cathode ray tube) and sent input characters to programs (produced locally at a keyboard) through a serial line.
From that perspective, a terminal emulator is a software application, normally running on a computer that was not designed to act as a terminal device, that makes it behave as one. Normally, this means it receives characters over a serial line and displays them to the user (for example in a specific window on the screen), and it processes user input in that window and sends it to a remote computer, where the program running there does the actual processing.
By contrast, tty lines were the serial lines used to send and receive those characters. In UNIX they have a common driver that does some processing of the characters received from the actual terminal. For example, the UNIX driver collects the characters, allowing some editing via the backspace key, and only makes the data available to the program running on the computer after the user (the terminal) has sent the RETURN key.
Some time ago, virtual terminal devices (devices that don't have an actual terminal behind them, but another program instead) became necessary in order to run programs that configure the connecting device (for example, to stop echoing password characters back to the terminal, or to do character-by-character input instead of line-by-line) while allowing the driving program on the other side of the virtual TTY to act upon those settings.
Virtual terminal devices come in pairs: terminal-emulating programs get the master side of the virtual terminal and run the actual program on the slave side (for example a login shell, to provide a window-based pseudo-terminal).
Now the answers to your questions:
1) Terminal input logic is managed by the virtual terminal driver of the slave device, so the program running on it has full control over character mapping and line/raw input. By the way, the program attached to the master side only gets the raw characters, without any interpretation, so it can, for example, send a Control-C character to interrupt the program running on the slave side.
2) When the program running on the slave side does a write, the write goes through the tty driver, which applies the line-discipline processing (for example, adding a CR character before any LF character to make a CRLF sequence, in the terminal's cooked mode of operation). The program running on the master side receives the raw characters (even, for example, a Ctrl-C written by the program). On input, the tty device interprets the input characters (as in the case of a Ctrl-C) and sends the proper signal to the group of processes attached to that pseudo-terminal.
3) Historically, terminals appeared as device pairs (as one specific kind of terminal character device driver), and as such they had inodes with major/minor number pairs. This limited their number to an administratively configured value. Nowadays Linux, for example, allows dynamic allocation of devices, making it possible to create device pairs on the fly, but the maximum number continues to be bounded (for efficiency and implementation reasons).
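A minimal sketch of that line discipline at work, assuming the default (cooked) settings a freshly allocated pty gets on Linux: a single "\n" written on the slave side reaches the master side as "\r\n".

    #define _XOPEN_SOURCE 600
    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        int master = posix_openpt(O_RDWR | O_NOCTTY);
        if (master < 0 || grantpt(master) < 0 || unlockpt(master) < 0) {
            perror("pty setup");
            return 1;
        }

        int slave = open(ptsname(master), O_RDWR | O_NOCTTY);
        if (slave < 0) { perror("open slave"); return 1; }

        /* Write on the slave side, as a program attached to the terminal
           would; the line discipline applies its output processing. */
        write(slave, "hi\n", 3);

        char buf[16];
        ssize_t n = read(master, buf, sizeof buf);
        for (ssize_t i = 0; i < n; i++)
            printf("%#04x ", (unsigned char)buf[i]);  /* 0x68 0x69 0x0d 0x0a */
        printf("\n");
        return 0;
    }

A terminal emulator does the same thing at a larger scale: it holds the master fd, forks a shell on the slave side, and simply renders whatever raw bytes come out of the master.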

Mac Script to count how long before Bluetooth becomes visible

I'm trying to speed up a process on a Linux machine. The end result of this operation is a Bluetooth device that then turns on.
I want to be able to run a script on my Mac that counts how long it takes before a Bluetooth device name (that I provide to it) becomes visible.
Can someone please guide me to how I would go about this?
Scenario
On my Linux machine I start some process that ends with the Bluetooth device turning on.
At the same time, on my Mac, I press "enter", which starts counting until a Bluetooth device by the name "TEST_DEVICE" becomes available, at which point the system will spit out the time.

Capturing Global Keyboard Events On Linux With NodeJS

I have a headless Debian ARM machine that I'm running Node on. The device has hard buttons that are mapped to normal keyboard events using gpio-keys.
My goal is to capture the global events from both the hard buttons as well as any attached keyboards in Node. I need a solution that can capture the keydown/keyup events independently of the terminal that it's run in (it will be run over an SSH session). It doesn't have to be cross-platform, as long as it works on ARM Debian I'll accept it.
I am imagining something reading directly from whatever sysfs attributes are necessary, but that's not a requirement.
Can anyone help me on this? I've been stuck for a while.
One of the device files /dev/input/event* will represent the gpio-keys device. You can figure out which one in a number of ways; one easy one is to look at the contents of the uevent file for the device, e.g. /sys/class/input/event0/device/uevent. It'll contain a number of useful key-value properties.
Once you've figured out which device you want, you can open and read from it. It'll return a stream of struct input_events, as defined in <linux/input.h>. These events will correspond to presses and releases for each of your buttons.
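If it helps, here is a minimal sketch of that discovery step in C (the Node side can open the same /dev/input/eventN node and parse the same struct layout from a Buffer). The EVIOCGNAME ioctl asks each node for its driver-reported name; the cap of 32 nodes is an arbitrary assumption:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/input.h>

    int main(void)
    {
        char path[32], name[256];
        /* Probe the first 32 event nodes (arbitrary cap); the gpio-keys
           device will report its name here. */
        for (int i = 0; i < 32; i++) {
            snprintf(path, sizeof path, "/dev/input/event%d", i);
            int fd = open(path, O_RDONLY);
            if (fd < 0)
                continue;
            if (ioctl(fd, EVIOCGNAME(sizeof name), name) >= 0)
                printf("%s: %s\n", path, name);
            close(fd);
        }
        return 0;
    }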
You may also want to take a look for existing solutions for at least part of the problem, such as node-keyboard: https://github.com/Bornholm/node-keyboard

Is it possible to send SysRq commands with a barcode scanner?

We have a kiosk application that runs matchbox on top of Linux, and has only a barcode scanner for input (no keyboard). It would be great to be able to print a barcode that, when scanned, sends commands like SysRq R etc., so that one could REISUB without having to disassemble the unit.
If there is not an existing way, could you patch the barcode driver to interpret a certain set of symbols and initiate the sequence?
Why do you need SysRq? Is the machine actually wedging itself or are you just trying to reboot cleanly? Why not just put a "reboot" command into whatever protocol you're using? What's wrong with simply doing a hard power cycle?
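If a clean reboot is really all that's needed, the simplest version of that suggestion is to have the kiosk application (or a tiny helper) treat one particular barcode as a reboot command. A rough sketch, where "REBOOT-NOW" is a made-up magic value and the scanner is assumed to present itself as a keyboard that types the code followed by Enter:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* "REBOOT-NOW" is a made-up magic value; pick your own barcode text. */
    #define MAGIC "REBOOT-NOW"

    int main(void)
    {
        char line[256];
        /* Each scan arrives as one line of text on stdin, ended by Enter. */
        while (fgets(line, sizeof line, stdin)) {
            line[strcspn(line, "\r\n")] = '\0';
            if (strcmp(line, MAGIC) == 0) {
                system("reboot");          /* clean reboot; needs root */
                return 0;
            }
        }
        return 0;
    }

For a hard reset you could instead write "b" to /proc/sysrq-trigger, but if the box is wedged badly enough to need Alt+SysRq, no userspace handler like this will be running anyway.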
