How to write key input in assembly? - linux

I'm trying to make a keyboard driver for Windows & Linux as a project. I want to simulate the actual process of writing a key (meaning not using anything such as Windows messages), and afterwards move on to waiting for input from the keyboard, which I found a lot of tutorials for.
Does anyone know how to do this for Windows & Linux? (running Windows 10 64-bit on an Intel processor, and Kali Linux 64-bit on an AMD processor)

You cannot have the same driver on Windows and on Linux. You'll need to make two different, unrelated programs, and you have to design them differently (because Windows and Linux have different driver architectures).
BTW, on Linux with a graphical desktop, a display server (such as Xorg or Wayland) is running. That server is the only program handling the physical keyboard. You might consider working with it.
The notion of a keyboard driver is too broad to have a concrete meaning here. On Linux, you could patch the kernel, patch the display server, improve the window manager, etc. There is no need, and not much interest, in coding that stuff in assembler.
Notice that on Linux, with a graphical desktop, the keyboard layout is handled in the display server, not in kernel code (so the kernel sends key events with keycodes close to scancodes, not characters; the Xorg server then sends keyboard events with similar keycodes to e.g. the window manager). Read more about the X Window System protocols and architecture, and e.g. EWMH. The graphical layers are very complex (both on Linux and on Windows): many millions of lines of code.
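If the goal is simply to inject key events into the system from user space on Linux (rather than writing a real kernel driver), one common route is the uinput subsystem, which lets a program create a virtual input device. Below is a minimal, hedged sketch, assuming a kernel that exposes /dev/uinput (and UI_DEV_SETUP, available since Linux 4.5) and enough permissions to open it; the device name and vendor/product IDs are made up:

```c
/* Sketch: inject a key press from user space via a virtual uinput device
 * (assumptions above: /dev/uinput is present and writable). */
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/uinput.h>

static void emit(int fd, int type, int code, int value)
{
    struct input_event ev;
    memset(&ev, 0, sizeof ev);
    ev.type = type;
    ev.code = code;
    ev.value = value;
    write(fd, &ev, sizeof ev);            /* error handling omitted */
}

int main(void)
{
    int fd = open("/dev/uinput", O_WRONLY | O_NONBLOCK);
    if (fd < 0) return 1;

    /* Declare which event types and keys the virtual device can emit. */
    ioctl(fd, UI_SET_EVBIT, EV_KEY);
    ioctl(fd, UI_SET_KEYBIT, KEY_A);

    struct uinput_setup usetup;
    memset(&usetup, 0, sizeof usetup);
    usetup.id.bustype = BUS_USB;
    usetup.id.vendor  = 0x1234;           /* arbitrary example IDs */
    usetup.id.product = 0x5678;
    strcpy(usetup.name, "example-virtual-keyboard");
    ioctl(fd, UI_DEV_SETUP, &usetup);
    ioctl(fd, UI_DEV_CREATE);

    sleep(1);                             /* let userspace notice the new device */

    emit(fd, EV_KEY, KEY_A, 1);           /* press 'a'   */
    emit(fd, EV_SYN, SYN_REPORT, 0);
    emit(fd, EV_KEY, KEY_A, 0);           /* release 'a' */
    emit(fd, EV_SYN, SYN_REPORT, 0);

    sleep(1);
    ioctl(fd, UI_DEV_DESTROY);
    close(fd);
    return 0;
}
```

Events injected this way travel the same evdev path as a physical keyboard, so the display server and applications see them as ordinary key presses.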

Related

How to create a Bootable GTK Application?

Hi, I have an application written in GTK and I would like to make it into a bootable ISO file.
I have tried many options but have failed, and have been sent in many directions using cmake and make by following several tutorials which did not work.
Does anybody know how to create a bootable ISO file for/from a GTK based application on Linux/Ubuntu?
I am currently using Ubuntu to develop the application, but I would prefer the GTK application to start up when the computer starts up, with no operating system running, if possible.
GTK requires an operating system kernel (a Linux kernel...) to be running, and some display server, e.g. Xorg.
So you actually need to make your own custom Linux distribution.
I would prefer the GTK application to start up when the computer starts up, and have no operating system running
This is not possible.
But you could study the source code of source-based Linux distributions like Gentoo and work for several months to make your own Linux distribution.
You would probably need help, and would have to address many issues you had not even thought of (e.g. AZERTY keyboard layouts, computers with only USB disks, laptops with only Wi-Fi network connections, etc.).
Notice that Debian & Ubuntu can be configured to boot some (open source) GTK based installation procedure. I guess you could study their implementation in detail (since it is open source).
It's not possible to boot a GTK application without an operating system, as Basile Starynkevitch said.
However, you can use Linux to display only your GTK application, without any additional programs, and I think it can be done more easily than with Starynkevitch's method.
You can try to use the tool Systemback or something similar to create a bootable live Linux distribution. Systemback is not maintained anymore, but there is a GitHub fork made by BluewhaleRobot that appears to be more up-to-date.
You can install a light Linux distribution, for example Xubuntu, and remove all unnecessary packages and programs. You can set the wallpaper, remove or keep the taskbar/start menu, etc. Then install your GTK application, add it to autostart and use Systemback's "Live system create" function.
The ISO image will be created with your program already installed in it and set to autostart.
It's not a perfect or fully stable solution, but it seems to be the easiest way to achieve what you want.

Linux's system calls for GUI? [closed]

I'm studying operating systems. I read that Windows has lots of system calls for managing windows and GUI components. I have also read that you can change the GUI manager of your Linux operating system. So does Linux have system calls for GUI management? How does the GUI work in Linux?
I'll take x86 as an example, as I am more familiar with x86 than with ARM. Also, I may get some details wrong, as I've been doing some research on this question while answering; feel free to correct me if I am wrong.
System booting
Some time ago, Linux used to boot with a legacy bootloader (GRUB legacy version). The GRUB bootloader would be started by the BIOS at 0x7c00 in RAM and then would read the kernel from the hard-disk. It would follow the multiboot specification. The multiboot specification mentions the state that the computer needs to be in before jumping to the kernel's entry point. The kernel would then launch a first process (init) that every process would be a child of.
Today, most Linux distributions boot with UEFI (with the option of legacy booting also available). A UEFI app is placed on the boot partition, a GPT partition marked as the ESP (EFI System Partition). This EFI app is launched and then follows the Linux Boot Protocol to launch Linux. On most distributions the init process has also been replaced by systemd, so Linux launches systemd as the first process of the computer. Actually, as stated in the manpage for systemd:
systemd is usually not invoked directly by the user, but is installed as the /sbin/init symlink and started during early boot.
The process that is started is thus /sbin/init, but it is a symlink to systemd. The systemd process then reads several configuration files from the hard disk called units. Some of these units are targets, which simply group several other units to read. At first systemd reads default.target, which pulls in several other units. Some of these units start processes, among which is the display manager (fancy terminology for the login prompt). Ubuntu currently starts the GNOME Display Manager (GDM) as the first graphical program (the gdm.service unit). This program starts the X server before presenting the user login screen (https://en.wikipedia.org/wiki/X_display_manager):
When the display manager runs on the user's computer, it starts the X server before presenting the user the login screen, optionally repeating when the user logs out.
Once logged in, GDM starts several other binaries responsible for letting you interact with the system (the actual desktop, a binary gathering input for this desktop, etc.). All of these components depend on the X server to work properly.
The DRM
The X server is a user program which makes extensive use of the Direct Rendering Manager (DRM) of the Linux kernel. The DRM is a system call interface which is used to interact with graphics cards. When the DRM detects a graphics card, it exposes a file like /dev/dri/card0 which is a character device (http://manpages.ubuntu.com/manpages/bionic/man7/drm.7.html).
In earlier days, the kernel framework was solely used to provide raw hardware access to privileged user-space processes which implement all the hardware abstraction layers. But more and more tasks were moved into the kernel. All these interfaces are based on ioctl(2) commands on the DRM character device. The libdrm library provides wrappers for these system calls and many helpers to simplify the API.
When a GPU is detected, the DRM system loads a driver for the detected hardware type. Each connected GPU is then presented to user-space via a character device that is usually available as /dev/dri/card0 and can be accessed with open(2) and close(2). However, it still depends on the graphics driver which interfaces are available on these devices. If an interface is not available, the syscalls will fail with EINVAL.
The ioctl call allows any number of operations on the /dev/dri/card0 file, since it is a general-purpose call: its request argument is simply an unsigned long, followed by a variable number of arguments (see https://man7.org/linux/man-pages/man2/ioctl.2.html).
The ioctl call thus allows hardware vendors (like NVIDIA, AMD, etc.) to provide drivers for their cards, with ioctl serving as a general interface between user mode and kernel mode.
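To make this concrete, here is a small, hedged sketch that opens /dev/dri/card0 and issues one such request, DRM_IOCTL_VERSION, to ask the kernel which driver is bound to the card. It assumes the kernel's UAPI DRM headers are installed (e.g. <drm/drm.h> from the kernel headers package; on some systems the header lives under libdrm's include path instead):

```c
/* Sketch: query the DRM driver name/version through the generic ioctl(2)
 * interface on the DRM character device (assumptions listed above). */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <drm/drm.h>

int main(void)
{
    int fd = open("/dev/dri/card0", O_RDWR);
    if (fd < 0) { perror("open /dev/dri/card0"); return 1; }

    char name[64] = {0};
    struct drm_version v;
    memset(&v, 0, sizeof v);
    v.name = name;                 /* buffer for the driver name */
    v.name_len = sizeof name - 1;

    /* DRM_IOCTL_VERSION is one of many request codes multiplexed through
     * ioctl(2) on this device; libdrm wraps it as drmGetVersion(). */
    if (ioctl(fd, DRM_IOCTL_VERSION, &v) == 0)
        printf("driver: %s (%d.%d.%d)\n", name,
               v.version_major, v.version_minor, v.version_patchlevel);
    else
        perror("DRM_IOCTL_VERSION");

    close(fd);
    return 0;
}
```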
OpenGL
There exist several 3D rendering APIs (OpenGL, Direct3D). OpenGL is mostly a set of C headers and a convention: the convention says what a certain call should do, and it is up to the hardware vendor to implement that convention for their own card. Mesa3D is an attempt to create an open source implementation of OpenGL for certain graphics cards. It works quite well for integrated Intel HD Graphics (since the documentation is open) and for AMD (since they cooperated and offered some insight into the workings of their cards), but not for NVIDIA (the Nouveau driver is mostly not working, or slow).
When you program with OpenGL, you include the OpenGL headers and link with libraries provided by the hardware vendor (or Mesa), which provide the definitions of the functions declared in the headers. Those definitions make use of the DRM and cooperate with the X server to show content on the screen.
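As a rough, hedged illustration of that "include the headers, link against the vendor/Mesa library" flow, the sketch below creates a GLX context and asks the linked OpenGL implementation who it is (assuming an X session plus the X11 and GL development headers; build with something like cc glinfo.c -lGL -lX11):

```c
/* Sketch: create a GLX context and query the linked OpenGL implementation
 * (assumptions above: an X display plus X11/GL development headers). */
#include <stdio.h>
#include <X11/Xlib.h>
#include <GL/gl.h>
#include <GL/glx.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) { fprintf(stderr, "no X display\n"); return 1; }

    int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
    XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);
    if (!vi) { fprintf(stderr, "no suitable GLX visual\n"); return 1; }

    /* A small, never-mapped window is enough to make a context current. */
    XSetWindowAttributes swa;
    swa.colormap = XCreateColormap(dpy, RootWindow(dpy, vi->screen),
                                   vi->visual, AllocNone);
    Window win = XCreateWindow(dpy, RootWindow(dpy, vi->screen), 0, 0, 16, 16,
                               0, vi->depth, InputOutput, vi->visual,
                               CWColormap, &swa);

    GLXContext ctx = glXCreateContext(dpy, vi, NULL, True);
    glXMakeCurrent(dpy, win, ctx);

    /* These strings come from the OpenGL library the program is linked
     * against (Mesa, NVIDIA's libGL, ...), not from the kernel. */
    printf("GL_VENDOR   : %s\n", (const char *)glGetString(GL_VENDOR));
    printf("GL_RENDERER : %s\n", (const char *)glGetString(GL_RENDERER));
    printf("GL_VERSION  : %s\n", (const char *)glGetString(GL_VERSION));

    glXMakeCurrent(dpy, None, NULL);
    glXDestroyContext(dpy, ctx);
    XDestroyWindow(dpy, win);
    XCloseDisplay(dpy);
    return 0;
}
```

The strings printed come from whichever libGL the program was linked against (Mesa, NVIDIA's proprietary driver, ...), which in turn talks to the kernel through the DRM described above.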
I'm studying operating systems. I read that Windows has lots of system calls for managing windows and GUI components. I have also read that you can change the GUI manager of your Linux operating system. So does Linux have system calls for GUI management? How does the GUI work in Linux?
System calls (provided by the kernel) are often buried (e.g. in some cases deliberately undocumented and proprietary) and should not be used directly. Almost everything you see is actually normal functions in dynamically linked/shared libraries. This allows the kernel's system calls to be radically changed without breaking everything (because everything only depends on the shared libraries), and it reduces the functionality needed in the kernel itself.
For example, most of the "system calls for managing windows and GUI components" you think Windows has could (internally, inside the relevant DLL) just end up using a single "send_message()" system call (to tell a different process, the GUI, that you want to create a window or change its position or ...).
For Linux it's roughly similar. The kernel's system calls (which actually are documented, for no sane reason - it goes against the spirit of the SYS-V specs and means badly written "Linux executables" aren't compatible with other Unix clones like FreeBSD or Solaris or OSX) exist for things like low-level memory management, raw file IO and sockets; but (like Windows) the kernel's system calls are buried under layers of shared libraries, and those shared libraries (e.g. Xlib, GLib, KWindowSystem, Qt, ...) just use "something" (file IO, pipes, sockets, ...) provided by the kernel to talk to another process (the display server, the GUI, ...).
Linux and Windows fall under separate categories; Linux is just a kernel, i.e. the piece under the hood that gives us the basic functionality we expect to run programs, like threads, memory and process management, etc. Windows is a full operating system, including the user facing components and numerous system libraries. An apter comparison would be a specific Linux distro and Windows.
On that note, distros, as independent operating systems, obviously can have different implementations of any OS component. Some distros, like Arch, don't come with a GUI by default at all. That said, essentially the entire Linux ecosystem uses Xorg and/or Wayland; I would recommend looking into the implementation details of those two.
A Linux GUI has quite a few differences compared to a Windows GUI. For example, the GUI is not considered part of the operating system, but rather something external added on top of it; that means no dedicated syscalls (it is not embedded in the OS at all). After all, as the previous answer says, Linux is just a kernel, meaning it only provides the really basic facilities (execution of programs, memory/thread management, process management, but not much more). Whatever comes next (a GUI, for example) is an added feature installed through packages.
This allows, for example, installing a GUI on top of a minimal installation of any Linux distro (CentOS, for example), and that GUI can be whichever one you want (GNOME, KDE...).

In Linux, how can I intercept keyboard input and optionally filter it?

I'm writing a cross-platform application, and I want to be able to intercept keyboard input and optionally filter it from reaching the rest of the application. My application loads plugins, and I am trying to stop the keystrokes from reaching a plugin's UI if it has focus.
On Windows I use SetWindowsHookExA and on macOS I use [NSEvent addLocalMonitorForEventsMatchingMask:].
Is there an equivalent for Linux?
I'm writing a cross platform application, I want to be able to intercept keyboard input
If your application is a GUI one, consider using a cross-platform framework such as Qt or GTK (or FLTK, FOX, etc...). If your application is command-line (like e.g. grep or GCC or ninja or MongoDB are), it might not even access the keyboard, if used inside some pipeline, and you might also use cross-platform frameworks like POCO. If your software is started by crontab, it won't even have access to the keyboard, which might not even exist or be plugged in.
The same source code (for Qt or GTK or FLTK etc...) will work for Linux and for Windows.
BTW, many Linux computers (e.g. most web servers, or a Raspberry Pi) don't have any keyboard or mouse.
For more, read Advanced Linux Programming and syscalls(2).
Read about Xorg and Wayland.
My application loads plugins, I am trying to stop the keystrokes from reaching the plugin's UI if it has focus.
Plugins on Linux are often implemented through dlopen(3) and dlsym(3) as ELF shared objects, conventionally in files named *.so (see elf(5)). Read the Program Library HowTo and, if you code in C++, also the C++ dlopen mini-HOWTO. If you code in OCaml, use the Dynlink module. If you can code in Common Lisp (e.g. using SBCL), you'll just use eval. If you have to code in Java, use some class loader.
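For reference, the dlopen/dlsym pattern mentioned above looks roughly like the hedged sketch below (the plugin path ./myplugin.so and the plugin_init symbol name are made-up examples; link the host program with -ldl on older glibc):

```c
/* Sketch of the dlopen(3)/dlsym(3) plugin pattern (hypothetical plugin
 * file name and entry-point symbol, as noted above). */
#include <stdio.h>
#include <dlfcn.h>

typedef int (*plugin_init_fn)(void);

int main(void)
{
    /* Load an ELF shared object at run time. */
    void *handle = dlopen("./myplugin.so", RTLD_NOW | RTLD_LOCAL);
    if (!handle) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }

    /* Look up a conventionally named entry point exported by the plugin. */
    plugin_init_fn init = (plugin_init_fn)dlsym(handle, "plugin_init");
    if (!init) {
        fprintf(stderr, "dlsym: %s\n", dlerror());
        dlclose(handle);
        return 1;
    }

    printf("plugin_init returned %d\n", init());
    dlclose(handle);
    return 0;
}
```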
With Xorg (that is, X11), every keyboard event generates some well-defined message (some XKeyEvent) emitted - on a tcp(7) or unix(7) socket - by the Xorg server to your Xlib client application.
On the client side (in your GUI application code), an event loop (around poll(2) or select(2)...) is waiting for such messages. See also time(7).
On my Debian system (according to the file /var/log/Xorg.0.log, and using proc(5)...), the Xorg server accesses the keyboard (through udev) as /dev/input/event1, and X11 clients communicate with the Xorg server.
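To make the event-loop part concrete, here is a minimal, hedged Xlib sketch that receives KeyPress events for its own window and "filters" one of them by simply not acting on it (assuming an X session and the X11 development headers; build with something like cc filter.c -lX11):

```c
/* Sketch: an Xlib event loop that sees KeyPress events for its own window
 * and chooses whether to act on them (assumptions above). */
#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/keysym.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) { fprintf(stderr, "cannot open display\n"); return 1; }

    Window win = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy), 0, 0,
                                     200, 100, 0, 0,
                                     WhitePixel(dpy, DefaultScreen(dpy)));
    /* Ask the server to send keyboard events for this window. */
    XSelectInput(dpy, win, KeyPressMask | KeyReleaseMask);
    XMapWindow(dpy, win);

    for (;;) {
        XEvent ev;
        XNextEvent(dpy, &ev);            /* blocks, like poll(2) on the socket */

        if (ev.type == KeyPress) {
            KeySym sym = XLookupKeysym(&ev.xkey, 0);
            if (sym == XK_space)
                continue;                /* "filter": ignore the space key */
            if (sym == XK_Escape)
                break;                   /* quit on Escape */
            printf("key press, keycode %u\n", ev.xkey.keycode);
        }
    }

    XDestroyWindow(dpy, win);
    XCloseDisplay(dpy);
    return 0;
}
```

In a real application built with a toolkit such as Qt or GTK, the toolkit owns this loop, so filtering is usually done through the toolkit's own event-filter mechanism rather than with raw Xlib.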

How Linux shows its desktop on screen at the kernel level

I have some questions about the Linux desktop environment.
How does Linux show its desktop environment on a screen? I mean, where and how are its GUI frames generated and sent to the LCD driver? Is it at the kernel level? Does it have any relation to the framebuffer (such as fb0)?
Is it possible to access the desktop GUI of Linux and write it to the framebuffer to show the Linux window environment?
I have searched a lot but did not find the answer to my main question: how is the Linux desktop environment created and shown on the monitor through the drivers on Linux?
Thank you for your attention.
In Linux there's no internal desktop or anything like that. Desktop environments are just regular applications, like any other application. Almost all desktop environments at their lowest level use a GUI library (e.g. Qt, GTK, ...). All these GUI libraries in turn talk to lower-level software called a windowing system, display server or window server.
On Unix systems the most widely used window system is the X Window System (simply called X or X11). Almost any GUI library that supports Linux works with X.
Wayland is another windowing system which is growing and is supposed to be a good replacement for X, because the X Window System is very old and has many issues; but X is still used almost everywhere on Linux and other Unix-based operating systems.
So if you really want to know what's going on down there, you should study the Linux graphics stack. As I said, desktop environments are just high-level applications; what you are really looking for runs from the windowing system (like X) down to lower-level libraries and kernel modules (KMS, DRM, ...).
KMS (kernel mode setting) works with the display controller, and DRM (direct rendering manager) works with the graphics card and GPU (however, it's really not as simple as I explained).
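Regarding the fb0 part of the question: the legacy framebuffer device is still exposed by the kernel on many systems, and you can draw to it directly when no display server is running. A minimal, hedged sketch, assuming a /dev/fb0 node and a 32-bit-per-pixel mode (run it from a text console, not under Xorg/Wayland):

```c
/* Sketch: map the legacy framebuffer and paint part of the screen
 * (assumptions above: /dev/fb0 exists, 32 bits per pixel). */
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/fb.h>

int main(void)
{
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0) { perror("open /dev/fb0"); return 1; }

    struct fb_var_screeninfo vinfo;
    struct fb_fix_screeninfo finfo;
    ioctl(fd, FBIOGET_VSCREENINFO, &vinfo);   /* resolution, bits per pixel */
    ioctl(fd, FBIOGET_FSCREENINFO, &finfo);   /* stride (line_length)       */

    size_t size = (size_t)finfo.line_length * vinfo.yres;
    uint8_t *fb = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED) { perror("mmap"); return 1; }

    /* Fill the top-left quarter of the screen with a solid colour. */
    for (uint32_t y = 0; y < vinfo.yres / 2; y++)
        for (uint32_t x = 0; x < vinfo.xres / 2; x++) {
            uint32_t *px = (uint32_t *)(fb + y * finfo.line_length
                                           + x * (vinfo.bits_per_pixel / 8));
            *px = 0x0000ff00;                 /* green, assuming XRGB8888 */
        }

    munmap(fb, size);
    close(fd);
    return 0;
}
```

Modern desktops do not draw this way; compositors go through DRM/KMS (and GPU acceleration) instead, but the example shows that, at the bottom, a frame is just pixels placed in memory that the display controller scans out.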

Should a bad USB device be able to crash a bug free Linux kernel?

My question is rather broad, I know, but I have been wondering about this for a long time.
A little background. I work in a Physics lab where all the lab computers are running Debian (mix of old version and Lenny) or more recently Ubuntu 10.4 LTS. We have written a lot of custom software to interface with experiment hardware and other computers.
We have a lot of FPGA boards that are controlling various parts of the experiment, these are connected via USB to different computers. After upgrading a computer controlling an experiment we started seeing crashes/lockups of the computer running all the lasers. This used to be completely stable.
My question is this: If the entire computer locks up because of an issue with
a) Python/GTK software gui
b) USB device driver
or
c) The actual device
can this be blamed on the Linux kernel (or other levels of the OS)?
Is it unfair to ask the Linux kernel not to panic even if I make mistakes in my implementation of software/hardware?
My own guess: Any user level applications should never be able to crash the entire system since they should only have access to their own stuff.
Any device driver becomes a part of the kernel itself and will therefore be able to crash it. Is my reasoning sound?
Bonus question: Is there a way to insulate the device and the kernel from each other so that Linux keeps running happily no matter what stupid mistakes are made with the hardware? That would be very useful for two reasons:
1) debugging is easier with a running system,
2) for the purposes of the experiment we really need long uptimes, and having only a part of the system crash is infinitely better than crashes in one part of the system propagating to the rest.
Any links and reading material on this subject would be appreciated. Thank you.
You are correct that unprivileged code should not be able to bring down the system, unless there's a kernel bug. The line between unprivileged and privileged isn't exactly the same as user-space vs kernel, however. A user-mode program can open /dev/kmem and trash the OS's internal data structures, if the user account has superuser privileges.
To insulate the main kernel from device driver problems, run the device driver inside a virtual machine.
Several popular VM systems, including VMware Workstation, support forwarding an arbitrary USB device from the host to the guest without a device-specific driver on the host.
