Difference between an API and a device driver
From the above link I read that an API is like a specification that describes what to do, while a driver is an implementation that describes how to do it.
Now, I couldn't find APIs in Linux for display, audio, etc. I have also read on the internet that Linux provides device files for interacting with device drivers: we can communicate with devices by reading from or writing to those files. But, as written above, an API is the specification that describes what to do, and that API layer seems to be missing here, so I don't know what commands to write to those files to interact with the devices. For example: how would I rasterize an image on the display with the help of these device files?
Device files are just a practical way to communicate between user space and the kernel. Some device files (most notably, block devices) have a uniform API to them, but that's kinda beside the point.
For most standard operations, you would not interact directly with a device file, but instead use a library that exposes a documented API for doing what you want. So, if you want to play sound files, you'll use, e.g., libjack, or an even higher abstraction layer, such as GStreamer or libvlc.
It is possible, and even likely, that those libraries use a device file for their actual output. You need not deal with that unless you want to.
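For instance, a minimal sketch using python-vlc, the Python binding for libvlc (the binding choice and the file name are illustrative additions, not something the answer above prescribes), lets the library decide which device file or sound server actually carries the output:

```python
# Hedged sketch: play a sound file through libvlc via the python-vlc binding.
# "example.ogg" is a placeholder path; install python-vlc and VLC itself first.
import time
import vlc

player = vlc.MediaPlayer("example.ogg")
player.play()
time.sleep(5)   # keep the script alive while playback runs in the background
player.stop()
```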
In other cases, you do want to open the device file and interact with it. In those cases, you need to read the relevant documentation to see how to do that. Some device files merely accept read and write requests. Others, such as tty devices, have ioctl commands that modify how they work. The man page for the relevant device will tell you what you need to know.
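As an illustration of the ioctl case, here is a minimal Python sketch (my own example, assuming the process is attached to a terminal) that queries a tty device with the standard TIOCGWINSZ ioctl:

```python
# Hedged sketch: ask a tty device for its window size via ioctl.
import fcntl
import struct
import termios

with open("/dev/tty", "rb") as tty:
    # TIOCGWINSZ fills in four unsigned shorts: rows, cols, xpixel, ypixel.
    raw = fcntl.ioctl(tty, termios.TIOCGWINSZ, struct.pack("HHHH", 0, 0, 0, 0))
    rows, cols, _, _ = struct.unpack("HHHH", raw)
    print(f"terminal is {cols}x{rows}")
```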
In general, many treat device files as an extension of the kernel's API. In fact, many call ioctl commands "user-defined syscalls". In all cases, just read the documentation to see what you need to do.
Related
I am doing some research on the WebUSB API for our company because we are going to start manufacturing devices in-house.
Our current device manufacturer provides an application so the team can plug the device into a computer and diagnose it. Their application allows us to read outputs from the device, as well as push commands/configuration to the device over a wired connection.
Since this device is 100% ours, we are also responsible for building out the diagnostic tooling. We need some sort of interface that allows a user to read outputs and send commands/configuration to the device over a wired USB connection.
Is WebUSB the correct API? If not, what are some suggestions for accomplishing this requirement? Are we limited to building some sort of desktop or mobile application?
I would recommend reading the resources below to help you understand whether the WebUSB API fits your needs:
https://web.dev/devices-introduction/ helps you pick the appropriate API to communicate with a hardware device of your choice.
https://web.dev/build-for-webusb/ explains how to build a device to take full advantage of the WebUSB API.
From what you describe, WebUSB isn't strictly required but won't hurt either.
First and foremost, you will need to implement the USB interfaces for reading data and sending configurations. It will be a custom protocol, not one of the standard USB device classes such as HID, video, or mass storage. The details of the protocol, and whether you use control, interrupt, or bulk transfers, are your choice.
I'm assuming you will connect the devices to Windows PCs, and you likely don't want to put money into writing device drivers. If so, the easiest approach is to add the descriptors and control requests required for Microsoft OS 2.0 descriptors. That way, the WinUSB driver will be installed automatically when the device is plugged in.
Using the WinUSB API, a Windows application will then be able to communicate with the USB device. No additional drivers are needed. (On macOS and Linux it's even easier as you don't need the Microsoft OS 2.0 Descriptors in the first place.)
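To make that concrete, here is a hedged host-side sketch using PyUSB (which rides on libusb and can open a device bound to WinUSB on Windows, or use the normal USB stack on Linux and macOS). The vendor/product IDs, endpoint addresses, and payloads are placeholders for whatever your custom protocol ends up defining:

```python
# Hedged sketch: talk to a vendor-specific USB device from the host with PyUSB.
import usb.core

dev = usb.core.find(idVendor=0x1234, idProduct=0x5678)  # placeholder IDs
if dev is None:
    raise RuntimeError("device not found")
dev.set_configuration()

# Bulk OUT endpoint 0x01: push a configuration blob (placeholder payload).
dev.write(0x01, b"\x01" + b"CONFIG")

# Bulk IN endpoint 0x81: read back a diagnostic record (placeholder length).
data = dev.read(0x81, 64, timeout=1000)
print(bytes(data))
```

The browser-side WebUSB code is similar in spirit (claim the interface, then use transferIn/transferOut), but that part is JavaScript and is walked through in the build-for-webusb article above.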
On top of that, you can implement the additional descriptors and control requests for WebUSB. This provides the additional benefit that you can write a web application (instead of a native application) for communicating with the USB device. (Currently, you are restricted to the Chrome browser.) WebUSB devices should implement the WinUSB descriptors anyway, as the alternative (.INF files and a manual installation process) is a pain.
The already mentioned web page https://web.dev/build-for-webusb/ is a complete example of how to implement it.
Is there a way to expose my own RS232 AVR device as a Linux filesystem device, e.g. /dev/avr_device? Must the program be written as a kernel-space module, or can it live in user space? Is this possible with libfuse? Or should I perhaps use FIFO pipes as the communication channel with the device?
To be able to mount a device on which you have installed a Linux filesystem, that device needs to be a block device, but a serial tty device is a character device, which is incompatible with that.
To solve that problem in the classical view of the system, you need to develop a block device driver that attaches to that character device (the serial port) and uses it to drive the block-device emulation protocol. This means converting the block number and block data into packets sent over the serial line to a receiver at the other side, which implements the details of being some kind of storage device. This can be done with some effort; the real question is whether simulating any kind of storage over a slow serial line is worth it.
The advantage of this approach is that you only have to simulate a block device, and you can then create on it any local filesystem available for Linux.
On a higher level, you can instead implement a filesystem type, which is a higher-level abstraction (FUSE allows you to do this), but that makes the problem harder, as you have to implement every filesystem primitive (and believe me, there are far more primitives to emulate in a filesystem than in a block device), mapping each remote primitive onto a set of local ones (this can be unfeasible for a single programmer).
This second approach completely fixes the functionality of the filesystem: the set of operations you can perform on files is limited to the primitives you implement. It is far more difficult and usually lacks uniformity with the rest of the system, so I would not recommend it.
The second approach has only one advantage: because the filesystem works in high-level primitives, these can be encoded more compactly into messages and transmitted more efficiently over the line, giving more speed on a slow connection. But that comes at the cost of implementing all the filesystem functionality yourself and losing uniformity in how such filesystems are used (you have to implement user access, security, caching of requests, etc.).
With the first approach, you only have to implement four or five primitives, and you get the full functionality of any filesystem that can be installed on a block device.
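To give a rough idea of how small that surface is, here is a hedged user-space sketch of the serial framing for those block primitives, using pyserial. The opcodes, block size, and acknowledgement byte are invented for illustration, and the block driver that actually plugs into the kernel would still have to be written in C; this only shows the kind of packet protocol it would speak over the line:

```python
# Hedged sketch: frame block reads/writes as packets over a serial link.
# Opcodes, block size, and ACK byte are made up; adapt to your AVR firmware.
import struct
import serial  # pyserial

BLOCK_SIZE = 512

class SerialBlockLink:
    def __init__(self, port="/dev/ttyUSB0", baudrate=115200):
        self.ser = serial.Serial(port, baudrate, timeout=2)

    def read_block(self, block_no):
        # opcode 0x01 = read; the device answers with BLOCK_SIZE bytes of data
        self.ser.write(struct.pack("<BI", 0x01, block_no))
        return self.ser.read(BLOCK_SIZE)

    def write_block(self, block_no, data):
        # opcode 0x02 = write; the device acknowledges with one status byte
        assert len(data) == BLOCK_SIZE
        self.ser.write(struct.pack("<BI", 0x02, block_no) + data)
        return self.ser.read(1) == b"\x00"
```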
I would like to create a web application that sends and receives ALSA MIDI messages on Linux. Only one web client is intended.
What kind of architecture / programs do I need for that?
I am familiar with Django but can't find the missing link to ALSA (or any system acting as a gateway to ALSA on my Ubuntu machine). Also, I have the small program ttymidi (http://www.varal.org/ttymidi/), which sends messages from a serial port to ALSA.
You might be able to use Python's miscellaneous operating system interfaces (the os module and low-level file I/O), but web applications aren't often designed this way. You may also have to worry about latency and buffering in your program.
The simplest way of doing what you want without any third-party library is to use pyALSA, which is the official Python wrapper around the C ALSA library.
I recommend using the Sequencer API instead of the lower-level RawMIDI interface. Check out some of the test apps and the C API documentation; they will definitely help you write your code.
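As a starting point, here is a hedged sketch using pyalsa.alsaseq; the client and port names are arbitrary, and the method signatures follow the test apps bundled with pyALSA, so verify them against the copy on your system:

```python
# Hedged sketch: receive ALSA sequencer events with pyALSA.
# Signatures are based on the pyalsa test apps; double-check locally.
from pyalsa.alsaseq import (
    Sequencer,
    SEQ_PORT_CAP_WRITE,
    SEQ_PORT_CAP_SUBS_WRITE,
    SEQ_PORT_TYPE_MIDI_GENERIC,
)

seq = Sequencer(clientname="web-midi-bridge")        # arbitrary client name
seq.create_simple_port(
    "input",
    SEQ_PORT_TYPE_MIDI_GENERIC,
    SEQ_PORT_CAP_WRITE | SEQ_PORT_CAP_SUBS_WRITE,     # let other clients send to us
)

while True:
    for event in seq.receive_events(timeout=1000, maxevents=4):
        print(event.type, event.get_data())
```

A Django (or any other) web front end would typically run this loop in a background thread or separate process and relay the decoded events to the single browser client, for example over a WebSocket.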
For Linux, there is a nifty little tool called xbindkeys that (surprise) binds commands of your choice to certain key combinations.
I am looking for something similar, except for a system hardware event. When I plug in my headphones to the output jack on my computer, I would like to be able to call a program. It would also be nice to be able to bind to the event when I un-plug my headphones.
Does anybody know if this is possible? Maybe through some cool Python X11 library?
Thanks in advance.
EDIT: Found the API for the jack abstraction layer: http://www.alsa-project.org/~tiwai/alsa-driver-api/ch06s02.html
Sadly, this only allows for polling of the device, not an event handler.
You probably want to use udev for this. I haven't used libudev, but here's something I found:
libudev - Monitoring Interface
libudev also provides a monitoring interface. The monitoring interface will report events to the application when the status of a device changes. This is useful for receiving notification when devices are connected or disconnected from the system.
The actions are returned as the following strings:
add - Device is connected to the system
remove - Device is disconnected from the system
change - Something about the device changed
move - Device node was moved, renamed, or re-parented
That article goes on to show how it obtains a file descriptor via udev_monitor_get_fd, which it later monitors via select.
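If you would rather stay in Python, as the question hints, pyudev wraps that same libudev monitoring interface. A minimal sketch (the subsystem filter and the handler path are assumptions to adapt after watching udevadm monitor while plugging the headphones in) could look like this:

```python
# Hedged sketch: react to udev events from Python via pyudev.
import subprocess
import pyudev

context = pyudev.Context()
monitor = pyudev.Monitor.from_netlink(context)
monitor.filter_by(subsystem="sound")   # assumption; adjust to what udevadm shows

for device in iter(monitor.poll, None):
    print(device.action, device.sys_name)
    if device.action == "change":
        # Placeholder: run whatever program should react to the (un)plug event.
        subprocess.run(["/path/to/your/handler.sh"])
```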
Most modern Linux desktops (notably GNOME and KDE) use D-Bus.
D-Bus, in turn, works together with HAL (older) and/or udev (newer).
Here are a couple of links that explain further:
https://www.linux.com/news/hardware/peripherals/180950-udev
http://w3.linux-magazine.com/issue/71/Dynamic_Device_Management_in%20Udev.pdf
http://dbus.freedesktop.org/doc/dbus-tutorial.html
I was wondering whether it is possible to capture audio data from other sources like the system output, FM radio, Bluetooth headset, etc. I'm particularly interested in capturing audio from the FM radio and have already investigated all possibilities, including trying to sniff the raw Bluetooth communication between the phone and the radio device, with no luck. It's too bad Android only allows recording audio from the MIC.
I've looked at the Android source code and couldn't find a backdoor that would allow me to do that without rooting the device. Do you, at least, have any idea how to use other devices (maybe by somehow accessing /dev/audio), say via the NDK or, even better, Java (maybe reflection?), to trick the system into capturing the audio stream from, say, the FM radio? (In my case I'm developing the app for the HTC Desire.)
PS. For those of you who are against using undocumented APIs, please don't post here. I'm writing an app for my personal use, and even if I ever publish it I will warn users about possible incompatibilities.
I've spent quite some time deciphering the audio stack, and I think you may try to hijack libaudio. You'll have trouble speaking directly to the hardware (/dev/*) because many devices use proprietary audio drivers. There's no rule in this regard.
However, the audio hardware abstraction layer (HAL) provided by /system/lib/libaudio.so should expose the API described at http://source.android.com/porting/audio.html
The Android system, and especially audioflinger, uses this libaudio HAL to find available devices, deal with routing, and of course to read/write PCM data.
So, you could hijack the interaction between audioflinger and libaudio by renaming the latter and providing your own libaudio that decorates the real one. Doing so, you should be able to log what happens and very possibly intercept the FM radio output, provided it is not handled directly by the hardware.
Of course, all this requires rooting. Please comment if you manage to do this; it interests me.