I have a Raspberry Pi. To access the GPIO pins or hardware peripherals (e.g. I2C, SPI), the program that accesses them needs to run as root, or else the user running that program has to be added to the group for that peripheral (e.g. the group i2c for I2C).
My question: in the real world (e.g. some piece of machinery running embedded Linux), is it standard practice to simply add a user to every group for each peripheral that the program needs? Is there a better way of doing this?
My second question: how does this work when, for example, you're using C to directly access hardware registers rather than going through /sys? The only ways I can think of are to run as root all the time (which is not a good idea at all) or to write a kernel module that deals with accessing the registers while the user-space program communicates with that module (which seems like a lot of work if there are more "recommended" ways). How do programs normally access hardware registers on an embedded Linux setup?
Adding all users to all the required groups would be painful. You have a couple of options. You can make use of the setuid and setgid mechanism: the process takes on the uid and gid of the executable file's owner and group, and can then access the devices with the right access level. Or you can leverage the sudo mechanism, which lets you allow users to execute programs as root with some fine-grained control.
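As a minimal sketch of the setgid approach (all names here are assumptions: the device node is /dev/i2c-1, the group is i2c, and the helper binary has been installed set-group-ID i2c, e.g. owned root:i2c with mode 2755), the program can open the device with the elevated group and then drop it immediately:

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    /* The set-group-ID bit on the binary gives this process an effective
     * gid of "i2c" (hypothetical setup), so the open succeeds. */
    int fd = open("/dev/i2c-1", O_RDWR);
    if (fd < 0) {
        perror("open /dev/i2c-1");
        return EXIT_FAILURE;
    }

    /* Drop the elevated effective group as soon as the descriptor is
     * obtained; the already-open fd stays usable afterwards. */
    if (setgid(getgid()) != 0) {
        perror("setgid");
        return EXIT_FAILURE;
    }

    /* ... talk to the device through fd from here on ... */

    close(fd);
    return EXIT_SUCCESS;
}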
The general model in modern OSes is to delegate the hardware access to a kernel resident device driver. In *nix OSes, the device driver then offers an API to the programs in the user space via the standard filesystem calls (open, close, read, write, ioctl). For most drivers, the ioctl call is effectively the kitchen sink for the entire API offered to the user space.
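To make that concrete, here is a rough user-space sketch of talking to an I2C peripheral through the kernel's i2c-dev driver using exactly those calls (open, ioctl, write, read). The bus path /dev/i2c-1, the slave address 0x48 and the register 0x00 are placeholders for whatever chip you actually have:

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/i2c-dev.h>

int main(void)
{
    int fd = open("/dev/i2c-1", O_RDWR);          /* bus number is an assumption */
    if (fd < 0) {
        perror("open /dev/i2c-1");
        return 1;
    }

    /* Tell the i2c-dev driver which slave address to talk to (hypothetical 0x48). */
    if (ioctl(fd, I2C_SLAVE, 0x48) < 0) {
        perror("ioctl(I2C_SLAVE)");
        close(fd);
        return 1;
    }

    unsigned char reg = 0x00;                     /* device-specific register */
    unsigned char val;
    if (write(fd, &reg, 1) != 1 || read(fd, &val, 1) != 1) {
        perror("i2c transfer");
        close(fd);
        return 1;
    }
    printf("register 0x%02x = 0x%02x\n", reg, val);

    close(fd);
    return 0;
}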
Libraries for GPIO access are provided with the Raspberry Pi's Raspbian/Debian release. Check out the back issues of The MagPi for numerous examples in Python and other languages, as well as the example projects there.
I'm studying operating systems. I read that Windows has lots of system calls for managing windows and GUI components. I have also read that you can change the GUI manager of your Linux operating system. So does Linux have system calls for GUI management? How does the GUI work in Linux?
I'll take x86 as an example as I am more aware of x86 stuff than ARM stuff. Also, I may get some information wrong as I've been doing some research on this question while answering. Feel free to correct me if I am wrong.
System booting
Some time ago, Linux typically booted with a legacy bootloader (GRUB Legacy). The BIOS would load GRUB at 0x7c00 in RAM, and GRUB would then read the kernel from the hard disk, following the Multiboot specification. The Multiboot specification describes the state the machine needs to be in before jumping to the kernel's entry point. The kernel would then launch a first process (init) of which every other process is a child.
Today, most Linux distributions boot with UEFI (with legacy booting also available as an option). A UEFI application is placed on the boot partition, partitioned as a GPT ESP (EFI System Partition). This EFI app is launched and then follows the Linux Boot Protocol to launch Linux. The classic init has also largely been replaced by systemd, so Linux launches systemd as the first process of the computer. Actually, as stated in the manpage for systemd:
systemd is usually not invoked directly by the user, but is installed as the /sbin/init symlink and started during early boot.
The process that is started is thus /sbin/init, but it is a symlink to systemd. The systemd process then reads several configuration files from the hard disk called units. Some of these units are targets, which simply list further units to read. At first systemd reads default.target, which specifies several other units. Some of those units start processes, among which is the display manager (fancy terminology for the login prompt). Recent Ubuntu releases start the GNOME Display Manager (GDM) as the first displaying program (the gdm.service unit). This program starts the X server before presenting the user login screen (https://en.wikipedia.org/wiki/X_display_manager).
When the display manager runs on the user's computer, it starts the X server before presenting the user the login screen, optionally repeating when the user logs out.
Once you are logged in, GDM starts several other binaries responsible for letting you interact with the system (the actual desktop, a binary to gather input for that desktop, etc.). All of these components depend on the X server to work properly.
The DRM
The X server is a user-space program which makes extensive use of the Direct Rendering Manager (DRM) of the Linux kernel. The DRM is a system call interface used to interact with graphics cards. When the DRM detects a graphics card, it exposes a character device such as /dev/dri/card0 (http://manpages.ubuntu.com/manpages/bionic/man7/drm.7.html).
In earlier days, the kernel framework was solely used to provide raw hardware access to privileged user-space processes which implement all the hardware abstraction layers. But more and more tasks were moved into the kernel. All these interfaces are based on ioctl(2) commands on the DRM character device. The libdrm library provides wrappers for these system-calls and many helpers to simplify the API.
When a GPU is detected, the DRM system loads a driver for the detected hardware type. Each connected GPU is then presented to user-space via a character-device that is usually available as /dev/dri/card0 and can be accessed with open(2) and close(2). However, it still depends on the graphics driver which interfaces are available on these devices. If an interface is not available, the syscalls will fail with EINVAL.
The ioctl call allows any number of operations on the /dev/dri/card0 file, since it is a general-purpose call whose request argument is simply an unsigned long; it also takes a variable number of arguments (see https://man7.org/linux/man-pages/man2/ioctl.2.html).
The ioctl call thus allows hardware vendors (like NVIDIA, AMD, etc.) to provide drivers for their cards, with ioctl serving as the general interface between user mode and kernel mode.
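As a rough illustration (not tied to any particular vendor's driver), the sketch below opens the DRM character device and issues the generic DRM_IOCTL_VERSION request to ask which kernel driver sits behind it. It assumes the DRM UAPI header is installed (on some distributions it lives under /usr/include/libdrm, in which case compile with -I/usr/include/libdrm) and that your user is allowed to open /dev/dri/card0 (typically by being in the video group):

#include <drm/drm.h>      /* may require -I/usr/include/libdrm */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/dri/card0", O_RDWR);
    if (fd < 0) {
        perror("open /dev/dri/card0");
        return 1;
    }

    /* Zero-initialised buffers; the kernel copies at most *_len bytes into each. */
    char name[64] = {0}, date[64] = {0}, desc[64] = {0};
    struct drm_version v;
    memset(&v, 0, sizeof(v));
    v.name = name; v.name_len = sizeof(name) - 1;
    v.date = date; v.date_len = sizeof(date) - 1;
    v.desc = desc; v.desc_len = sizeof(desc) - 1;

    if (ioctl(fd, DRM_IOCTL_VERSION, &v) < 0) {
        perror("DRM_IOCTL_VERSION");
        close(fd);
        return 1;
    }

    printf("driver: %s %d.%d.%d (%s)\n",
           name, v.version_major, v.version_minor, v.version_patchlevel, desc);

    close(fd);
    return 0;
}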
OpenGL
Several 3D rendering APIs exist (OpenGL, Direct3D). OpenGL is mostly a set of C headers and a convention. The convention says what a certain call should do; it is up to the hardware vendor to implement the convention for their own card. Mesa3D has been an attempt to create an open-source implementation of OpenGL for certain graphics cards. It has worked quite well for integrated Intel HD Graphics (since the documentation is open) and for AMD (since they cooperated and offered some insight into the workings of their cards), but not for NVIDIA (the Nouveau driver is mostly incomplete or slow).
When you program some OpenGL, you include the OpenGL headers and link against libraries provided by the hardware vendor, which supply the definitions of the functions declared in the headers. These implementations make use of the DRM and cooperate with the X server to show content on the screen.
System calls (provided by the kernel) are often buried (e.g. in some cases deliberately undocumented and proprietary) and should not be used. Almost everything you see is actually a normal function in dynamically linked/shared libraries. This allows the kernel's system calls to be radically changed without breaking everything (because everything only depends on the shared libraries), and it reduces the functionality needed in the kernel itself.
For example, most of the "system calls for managing windows and GUI components" you think Windows has could (internally, inside the relevant DLL) just end up using a single "send_message()" system call (to tell a different process, the GUI, that you want to create a window, change its position, or ...).
For Linux it's roughly similar. The kernel's system calls (which actually are documented, for no sane reason - it goes against the spirit of the SYS-V specs and means badly written "Linux executables" aren't compatible with other Unix clones like FreeBSD or Solaris or OS X) exist for things like low-level memory management, raw file I/O, and sockets; but (like Windows) the kernel's system calls are buried under layers of shared libraries, and those shared libraries (e.g. Xlib, GLib, KWindowSystem, Qt, ...) just use "something" (file I/O, pipes, sockets, ...) provided by the kernel to talk to another process (display server, GUI, ...).
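To see the "library talks to another process" idea in practice, here is a minimal Xlib sketch. None of the calls below are GUI system calls: XOpenDisplay just connects to the X server (normally over a Unix-domain socket), and every later call becomes a message on that connection. Build with something like cc demo.c -lX11, assuming the Xlib development package is installed:

#include <X11/Xlib.h>
#include <stdio.h>

int main(void)
{
    /* Connect to the X server; under the hood this is a socket connection. */
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) {
        fprintf(stderr, "cannot open display\n");
        return 1;
    }

    int screen = DefaultScreen(dpy);
    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, screen),
                                     10, 10, 200, 100, 1,
                                     BlackPixel(dpy, screen),
                                     WhitePixel(dpy, screen));

    XSelectInput(dpy, win, ExposureMask | KeyPressMask);
    XMapWindow(dpy, win);            /* ask the server to show the window */

    XEvent ev;
    for (;;) {
        XNextEvent(dpy, &ev);        /* events arrive as messages from the server */
        if (ev.type == KeyPress)
            break;                   /* quit on any key press */
    }

    XCloseDisplay(dpy);
    return 0;
}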
Linux and Windows fall under separate categories: Linux is just a kernel, i.e. the piece under the hood that gives us the basic functionality we expect in order to run programs, like threads, memory and process management, etc. Windows is a full operating system, including the user-facing components and numerous system libraries. A more apt comparison would be between a specific Linux distro and Windows.
On that note, distros, as independent operating systems, obviously can have different implementations of any OS component. Some distros, like Arch, don't come with a GUI by default at all. That said, essentially the entire Linux ecosystem uses Xorg and/or Wayland; I would recommend looking into the implementation details of those two.
A Linux GUI has quite a few differences compared to the Windows GUI. For example, the GUI is not considered part of the operating system but rather something external to it; that means no dedicated syscalls (it is not embedded in the OS at all). After all, as the previous answer says, Linux is a kernel, which means it only provides the basics (execution of programs, memory/thread management, process management, but not much more). Whatever comes next (a GUI, for example) is an added feature installed through packages.
This allows, for example, installing a GUI on top of a minimal installation of any Linux distro (CentOS, for example), and that GUI can be the one you want (Gnome, KDE...).
I would suspect such a method might not even expose much of the architecture from an engineering standpoint, and hence might not even be intrusive to the involved entity's intellectual property.
Probably not. The JIT compiler would have to compile that code in real time first, and since drivers hook into the kernel, the kernel would be the only real program that could do that; and since this raises the possibility of security issues, I would imagine such a scenario would not be practical.
Another reason for that being impossible in general is that the internal kernel API is different, even conceptually, between Windows and Linux.
So, in general, a Windows driver is based on resources and functions that are Windows-specific (and vice versa).
Some clever people did manage ndiswrapper, but I guess they had to simulate the Windows kernel-specific API for Wi-Fi...; doing that for other types of drivers (graphics, ...) may be practically impossible.
A practical piece of advice is to avoid buying hardware without Linux drivers (preferably free-software ones). This puts market pressure on hardware manufacturers.
My question is rather broad, I know, but I have been wondering about this for a long time.
A little background: I work in a physics lab where all the lab computers are running Debian (a mix of older versions and Lenny) or, more recently, Ubuntu 10.04 LTS. We have written a lot of custom software to interface with experiment hardware and other computers.
We have a lot of FPGA boards that are controlling various parts of the experiment, these are connected via USB to different computers. After upgrading a computer controlling an experiment we started seeing crashes/lockups of the computer running all the lasers. This used to be completely stable.
My question is this: if the entire computer locks up because of an issue with
a) the Python/GTK software GUI,
b) the USB device driver, or
c) the actual device,
can this be blamed on the Linux kernel (or other levels of the OS)?
Is it unfair to ask the Linux kernel not to panic even if I make mistakes in my software/hardware implementation?
My own guess: any user-level application should never be able to crash the entire system, since it should only have access to its own stuff.
Any device driver becomes a part of the kernel itself and will therefore be able to crash it. Is my reasoning sound?
Bonus question: is there a way to insulate the device and the kernel from each other, so that Linux keeps running happily no matter what stupid mistakes are made with the hardware? That would be very useful for two reasons:
1) debugging is easier with a running system, and
2) for the purposes of the experiment we really need long uptimes, and having only a part of the system crash is infinitely better than crashes in one part of the system propagating to the rest.
Any links and reading material on this subject would be appreciated. Thank you.
You are correct that unprivileged code should not be able to bring down the system, unless there's a kernel bug. The line between unprivileged and privileged isn't exactly the same as user-space vs kernel, however. A user-mode program can open /dev/kmem and trash the OS's internal data structures, if the user account has superuser privileges.
To insulate the main kernel from device driver problems, run the device driver inside a virtual machine.
Several popular VM systems, including VMWare Workstation, support forwarding an arbitrary USB device from the host to the guest without a device-specific driver on the host.
I'm working on one part of a very high-performance piece of hardware that works under Linux. We'd like to cache some data but we're worried about memory consumption - so the idea is to create a user process to manage the cache. That way, the cache can be in virtual memory, not in kernel space, et cetera.
The question is: what's the best way to do this? My first instinct is to have the kernel module create a character device file, and have a user program that opens that file and then sits in a select() call waiting for commands to arrive on it. But I'm concerned that this might not be optimal. A friend mentioned he knew of a socket-based interface, but when pressed he couldn't provide any details....
Any suggestions?
I think you're looking for the netlink interface. See Why and How to Use Netlink Socket [sic] for more information. Be careful of security issues when talking between the kernel and user space; there was a recent vulnerability when udev neglected to check that messages were coming from the kernel rather than user space.
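As a minimal sketch of the user-space side of such a netlink link (the protocol number 31 here is hypothetical and would have to match what your kernel module registers with netlink_kernel_create(); socket() will fail if nothing has registered it):

#include <linux/netlink.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define NETLINK_CACHE_PROTO 31          /* hypothetical, must match the kernel module */
#define MAX_PAYLOAD 1024

int main(void)
{
    int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_CACHE_PROTO);
    if (fd < 0) {
        perror("socket(AF_NETLINK)");
        return 1;
    }

    struct sockaddr_nl src = {0}, dst = {0};
    src.nl_family = AF_NETLINK;
    src.nl_pid = getpid();              /* our netlink "address" */
    if (bind(fd, (struct sockaddr *)&src, sizeof(src)) < 0) {
        perror("bind");
        close(fd);
        return 1;
    }

    dst.nl_family = AF_NETLINK;
    dst.nl_pid = 0;                     /* 0 means "the kernel" */

    /* Build a netlink message with a text payload for the kernel module. */
    struct nlmsghdr *nlh = calloc(1, NLMSG_SPACE(MAX_PAYLOAD));
    nlh->nlmsg_len = NLMSG_SPACE(MAX_PAYLOAD);
    nlh->nlmsg_pid = getpid();
    strcpy(NLMSG_DATA(nlh), "hello from user space");

    sendto(fd, nlh, nlh->nlmsg_len, 0, (struct sockaddr *)&dst, sizeof(dst));

    /* The kernel module's reply would be picked up with recvfrom() on the same socket. */
    free(nlh);
    close(fd);
    return 0;
}

As the udev vulnerability mentioned above shows, whichever side receives a message should verify where it came from (for example, that it really came from the kernel) before trusting it.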
I was always attracted to the world of kernel hacking and embedded systems.
Has anyone got good tutorials (+easily available hardware) on starting to mess with such stuff?
Something like kits for writing drivers etc, which come with good documentation and are affordable?
Thanks!
If you are completely new to kernel development, I would suggest not starting with hardware and instead writing some "software-only" kernel modules first (proc files / sysfs, or, for more complex examples, filesystem / network development), developing on a UML/VMware/VirtualBox/... machine so that crashing it won't hurt so much :) For embedded development you could go for a small ARM development kit or a small Via C3/C4 machine, or any old PC which you can burn with your homebrew USB / PCI / whatever device.
A good place to start is probably Kernelnewbies.org - which has lots of links and useful information for kernel developers, and also features a list of easy to implement tasks to tackle for beginners.
Some books to read:
Understanding the Linux Kernel - a very good reference detailing the design of the kernel subsystems
Linux Device Drivers - written more like a tutorial with a lot of example code, focusing on getting you going and explaining key aspects of the Linux kernel. It introduces the build process and the basics of kernel modules.
Linux Kernel Module Programming Guide - Some more introductory material
As suggested earlier, looking at the Linux code is always a good idea, especially as Linux kernel APIs tend to change quite often... LXR helps a lot with a very nice browsing interface - lxr.linux.no
To understand the Kernel Build process, this link might be helpful:
Linux Kernel Makefiles (kbuild)
Last but not least, browse the Documentation directory of the Kernel Source distribution!
Here are some interesting exercises insolently stolen from a kernel development class:
Write a kernel module which creates the file /proc/jiffies, reporting the current time in jiffies on every read access (a minimal sketch of this one follows after the list).
Write a kernel module providing the proc file /proc/sleep. When an application writes a number of seconds as ASCII text into this file ("echo 3 > /proc/sleep"), it should block for the specified number of seconds. Write accesses should have no side effect on the contents of the file, i.e. on read accesses the file should appear to be empty (see LDD3, ch. 6/7).
Write a proc file where you can store some text temporarily (using echo "blah" > /proc/pipe) and get it out again (cat /proc/pipe), clearing the file. Watch out for synchronisation issues.
Modify the pipe example module to register as a character device /dev/pipe, add dynamic memory allocation for write requests.
Write a really simple file system.
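As a starting point for the first exercise, here is a minimal sketch of a /proc/jiffies module. It assumes a reasonably recent kernel (5.6+) where proc entries use struct proc_ops (older kernels use struct file_operations instead), and it needs the usual obj-m kbuild Makefile to build:

#include <linux/module.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>
#include <linux/jiffies.h>

/* Called on every read of /proc/jiffies: print the current jiffies value. */
static int jiffies_show(struct seq_file *m, void *v)
{
    seq_printf(m, "%lu\n", jiffies);
    return 0;
}

static int jiffies_open(struct inode *inode, struct file *file)
{
    return single_open(file, jiffies_show, NULL);
}

static const struct proc_ops jiffies_proc_ops = {
    .proc_open    = jiffies_open,
    .proc_read    = seq_read,
    .proc_lseek   = seq_lseek,
    .proc_release = single_release,
};

static int __init jiffies_init(void)
{
    if (!proc_create("jiffies", 0444, NULL, &jiffies_proc_ops))
        return -ENOMEM;
    return 0;
}

static void __exit jiffies_exit(void)
{
    remove_proc_entry("jiffies", NULL);
}

module_init(jiffies_init);
module_exit(jiffies_exit);
MODULE_LICENSE("GPL");

After insmod, "cat /proc/jiffies" should print a growing number; the other exercises build on the same proc plumbing plus a write handler.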
An absolute must is this book by Rubini (available both as a hard copy and as a free soft copy).
He gives implementations of several dummy drivers that don't require any hardware other than your PC. So for getting started in kernel development it's the easiest way to go.
As for doing embedded work, I would recommend purchasing one of the numerous SBCs (single-board computers) that are out there. A number of these are based on x86 processors, usually with PC/104 interfaces (electrically, PC/104 is identical to the ISA bus standard, but based on stackable connectors rather than edge connectors - very easy to interface custom hardware to).
They usually have VGA connectors, which makes debugging easier.
For embedded Linux hacking, the simple Linksys WRT54G router that you can buy everywhere is a development platform in its own right (http://en.wikipedia.org/wiki/Linksys_WRT54G_series):
The WRT54G is notable for being the first consumer-level network device that had its firmware source code released to satisfy the obligations of the GNU GPL. This allows programmers to modify the firmware to change or add functionality to the device. Several third-party firmware projects provide the public with enhanced firmware for the WRT54G.
I've tried installing OpenWrt and DD-WRT firmware on it. You can check those out as a starting point for hacking on a low-cost platform.
For starters, the best way is to read a lot of code. Since Linux is Open Source, you'll find dozens of drivers. Find one that works in some ways like what you want to write. You'll find some decent and relatively easy-to-understand code (the loopback device, ROM fs, etc.)
You can also use lxr.linux.no, which is the Linux code cross-referenced. If you have to find out how something works and need to look into the code, this is a good and easy way.
There's also an O'Reilly book (Understanding the Linux Kernel; the 3rd edition is about the 2.6 kernels), or if you want something for free, you can use the Advanced Linux Programming book (http://www.advancedlinuxprogramming.com/). There is also a lot of specific documentation about file systems, networking, etc.
Some things to be prepared for:
You'll be cross-compiling. The embedded device will use a MIPS, PowerPC, or ARM CPU but won't have enough CPU power, memory, or storage to compile its own kernel in a reasonable amount of time.
An embedded system often uses a serial port as the console, and to lower the cost there is usually no connector soldered onto production boards. Debugging kernel panics is very difficult: unless you can solder on a serial port connector, you won't have much information about what went wrong.
The Linksys NSLU2 is a low-cost way to get a real embedded system to work with, and has a USB port to add peripherals. Any of a number of wireless access points can also be used, see the OpenWrt compatibility page. Be aware that current models of the Linksys WRT54G you'll find in stores can no longer be used with Linux: they have less RAM and Flash in order to reduce the cost. Cisco/Linksys now uses vxWorks on the WRT54G, with a smaller memory footprint.
If you really want to get into it, evaluation kits for embedded CPUs start at a couple hundred US dollars. I'd recommend not spending money on these unless you need it professionally for a job or consulting contract.
I am a complete beginner in kernel hacking :) I decided to buy two books, "Linux Program Development: a guide with exercises" and "Writing Linux Device Drivers: a guide with exercises". They are very clearly written and provide a good base for further learning.