I'm trying to write a slightly modified CAN protocol for SocketCAN. The SocketCAN documentation has a short section about this:
5.3 writing own CAN protocol modules
To implement a new protocol in the protocol family PF_CAN a new
protocol has to be defined in include/linux/can.h .
The prototypes and definitions to use the SocketCAN core can be
accessed by including include/linux/can/core.h .
In addition to functions that register the CAN protocol and the
CAN device notifier chain there are functions to subscribe CAN
frames received by CAN interfaces and to send CAN frames:
can_rx_register - subscribe CAN frames from a specific interface
can_rx_unregister - unsubscribe CAN frames from a specific interface
can_send - transmit a CAN frame (optional with local loopback)
For details see the kerneldoc documentation in net/can/af_can.c or
the source code of net/can/raw.c or net/can/bcm.c .
(https://www.kernel.org/doc/Documentation/networking/can.txt)
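For reference, my reading of the functions listed above is a skeleton roughly like the following. This is a non-runnable sketch based on how net/can/raw.c registers itself in recent kernels; the names my_proto_ops, my_proto, my_can_proto and the protocol number CAN_MYPROTO are mine, not real kernel identifiers:

```c
#include <linux/module.h>
#include <linux/can.h>
#include <linux/can/core.h>

/* Hypothetical protocol number; the real ones (CAN_RAW, CAN_BCM, ...)
 * are defined in include/linux/can.h */
#define CAN_MYPROTO 5

static const struct proto_ops my_proto_ops = {
    .family = PF_CAN,
    /* .release, .bind, .sendmsg, .recvmsg, ... as in net/can/raw.c */
};

static struct proto my_proto = {
    .name     = "MYPROTO",
    .owner    = THIS_MODULE,
    .obj_size = sizeof(struct sock), /* normally a struct embedding struct sock */
};

static const struct can_proto my_can_proto = {
    .type     = SOCK_DGRAM,
    .protocol = CAN_MYPROTO,
    .ops      = &my_proto_ops,
    .prot     = &my_proto,
};

static int __init my_init(void)
{
    /* Hooks the new protocol into the PF_CAN core (net/can/af_can.c) */
    return can_proto_register(&my_can_proto);
}
```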
The problem is I can't find some of the files referenced here. I'm not super familiar with the Linux kernel, so I don't know if I'm looking in the wrong place. I can find include/linux/can.h and the directory include/linux/can/ but there is no core.h file there. Additionally, I can't locate the net/ directory that is referenced.
Other info:
I am able to send and receive raw CAN frames, so I believe I have SocketCAN set up correctly
Contents of directory (where core.h should be):
beaglebone:~# ls /usr/include/linux/can/
bcm.h error.h gw.h netlink.h raw.h
I'm using Debian on a BeagleBone Black (I'm not sure if the embeddedness of my system makes a difference)
If someone can help point me to where I should be looking for these files, I would be very obliged.
Many thanks!
The CAN protocol is implemented in hardware; attempts to make packets that don't comply with the standard won't work with compliant hardware.
I'm trying to communicate to a custom piece of hardware from a (userspace) C++ program. The device is an HID device, but not a mouse/keyboard.
On Windows, I can use HidD_SetOutputReport to send a report, and then HidD_GetInputReport to receive the reply. (There is more than one report being generated, but those calls let me specify which one I want.)
I'm not doing anything fancy, so it's nice and straightforward.
I am having trouble figuring out what the simple Linux alternative to those calls is.
If someone could point me towards documentation or a code example that illustrates equivalent operations on Linux, I would be very grateful.
Thank you.
If your device is a HID device, then to send a HID report you write to the corresponding /dev/hidraw* device. This is the alternative to HidD_SetOutputReport.
Most devices now use EP0 for the return communication, so to get the response you read from the corresponding /dev/hidraw* device. This is the alternative to HidD_GetInputReport.
If your hardware is not using EP0 for communication, you can find the information in your endpoint descriptor, which defines how to get the response back.
I'm hoping to just use the kernel header linux/nl80211.h to get the channel my network device is on. I'm on a very restricted system where builds have to happen with a minimum number of extra packages. It feels strange that SIOCGIWFREQ would be so easy to get, but that I'd need a library just to get a frequency via nl80211.
Are there any examples of how to use the nl80211 interface directly in Linux? I'm just hoping to read NL80211_FREQUENCY_ATTR_FREQ.
After a lot of struggling, I found out! It's actually easier to use netlink without libnl, as long as you're not doing anything complicated.
I wrote up an example here that prints all your wireless devices, what networks and channels they're connected to: https://github.com/cnlohr/netlink_without_libnl/blob/master/without_libnl.c
I want to write a dummy ALSA-compliant driver as a loadable kernel module. When accessed through alsa-lib by aplay/arecord, say, it must behave as a normal 7.1-channel audio device, providing at least all the basic controls: sampling rate, number of channels, sample format, etc.
Underneath, it will just take every channel from the audio stream and send it out over the network as a stream of UDP packets.
It must be loadable multiple times, ultimately exposing as many audio devices under /dev as desired. That way we would have multiple virtual sound cards in the system.
What should be the minimal structure of such a kernel module?
Can you give me an example skeleton (at least the interfaces) that is 100% ALSA-compliant?
ALSA driver examples are so poor...
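To make the ask concrete, my understanding is that the skeleton boils down to roughly this (a non-runnable sketch along the lines of sound/drivers/dummy.c; my_pcm_ops and the my_* callbacks are placeholders):

```c
#include <linux/platform_device.h>
#include <sound/core.h>
#include <sound/pcm.h>
#include <sound/initval.h>

static struct snd_pcm_ops my_pcm_ops = {
    .open      = my_pcm_open,   /* advertise rates/channels/formats here */
    .hw_params = my_hw_params,
    .trigger   = my_trigger,    /* start/stop the stream */
    .pointer   = my_pointer,    /* report current buffer position */
    /* ... */
};

static int my_probe(struct platform_device *pdev)
{
    struct snd_card *card;
    struct snd_pcm *pcm;
    int err;

    err = snd_card_new(&pdev->dev, SNDRV_DEFAULT_IDX1, SNDRV_DEFAULT_STR1,
                       THIS_MODULE, 0, &card);
    if (err < 0)
        return err;

    err = snd_pcm_new(card, "MyVirtualPCM", 0,
                      1 /* playback */, 1 /* capture */, &pcm);
    if (err < 0)
        return err;
    snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_PLAYBACK, &my_pcm_ops);

    return snd_card_register(card);
}
```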
I think I've just found what I need.
There are no better ALSA interface examples than the "dummy" and "aloop" templates under the sound/drivers directory in the kernel tree:
https://alsa-project.org/main/index.php/Matrix:Module-dummy
https://www.alsa-project.org/main/index.php/Matrix:Module-aloop
I'll need to implement the network part only.
EDIT:
Adding yet another project for a very simple but essential virtual ALSA driver:
https://alsa-project.org/main/index.php/Minivosc
EDIT 2020_09_25:
Yet another great ALSA example:
https://www.openpixelsystems.org/posts/2020-06-27-alsa-driver/
Install alsa-base and alsa-utils, then load the module:
modprobe snd-dummy
Use alsamixer, or mocp (which requires installing moc), to configure the dummy audio device.
I need to make an old Linux box running 2.6.12.1 kernel communicate with an older computer that is using:
ISO 8602 Datagram (connectionless service) 1987 12 15 (1st Edition)
ISO 8073 Class 4 (connection oriented service)
These are using "Inactive Network Layer" subset. (I am pretty sure this means I do not have to worry about routing. The two end points are hitting each other with their mac addresses.)
I have a kernel module that implements the connectionless part. In order to get the connection oriented service operational, what is the best approach? I have been taking the approach of adding in the struct proto_ops .connect, .accept, .listen functions to my existing connectionless driver by referring to the tcp implementation.
Maybe there is a better approach? I am spending a lot of time trying to work out what the tcp code is doing and then deciding whether it is relevant to my needs. For example, the Nagle algorithm isn't needed because I don't have small bits of data being transmitted. There is also probably a lot of error-recovery and flow-control machinery I don't need, because I know what data the two endpoints transmit and how frequently they transmit it.
My plan is to implement this first with minimal (if any) packet retransmission, sequencing, etc., to the point where my Wireshark capture looks similar to the one I have from the live system, then try mine against the real thing and add whatever error-recovery/retransmit logic turns out to be necessary. In other words, it is a pain in the rear trying to separate the guts of the tcp stream implementation that I want to copy from the extra error-correction/flow-control code that I might never need.
I found net/core/stream.c which says:
* Generic stream handling routines. These are generic for most
* protocols. Even IP. Tonight 8-).
* This is used because TCP, LLC (others too) layer all have mostly
* identical sendmsg() and recvmsg() code.
* So we (will) share it here.
This suggested to me that maybe there is a simpler generic stream implementation that I can start from. Can someone recommend a more basic stream driver that I should start from instead of tcp?
Is there any example code that provides a basic stream implementation?
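For concreteness, the shape of what I have been adding looks roughly like this (a non-runnable sketch; the my_* functions are placeholders, AF_MYTP is a hypothetical address family, and CR/CC/DT/AK are the ISO 8073 TPDU types):

```c
#include <linux/net.h>

/* Connection-oriented entry points added alongside the existing
 * connectionless ones; each my_* function is hypothetical. */
static const struct proto_ops my_tp4_ops = {
    .family  = AF_MYTP,     /* hypothetical address family */
    .owner   = THIS_MODULE,
    .connect = my_connect,  /* send CR TPDU, wait for CC */
    .listen  = my_listen,   /* mark socket as accepting incoming CRs */
    .accept  = my_accept,   /* hand back a new sock for an incoming CR */
    .sendmsg = my_sendmsg,  /* segment into DT TPDUs, sequence them */
    .recvmsg = my_recvmsg,  /* reassemble DT TPDUs, emit AK TPDUs */
    /* remaining ops carried over from the connectionless driver */
};
```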
I made a user level library to implement the protocol providing my own versions of open/read/write/select etc. If anyone else cares, you can find me at http://pnwsoft.com
Do not attempt to use openss7. It is a total waste of time.
Does anyone know of a good way to get a bi-directional dump of MIDI SysEx data on Linux? (between a Yamaha PSR-E413 MIDI keyboard and a copy of the Yamaha MusicSoft Downloader running in Wine)
I'd like to reverse-engineer the protocol used to copy MIDI files to and from my keyboard's internal memory and, to do that, I need to do some recording of valid exchanges between the two.
The utility does work in Wine (with a little nudging) but I don't want to have to rely on a cheap, un-scriptable app in Wine when I could be using a FUSE filesystem.
Here's the current state of things:
My keyboard connects to my PC via a built-in USB-MIDI bridge. USB dumpers/snoopers are an option, but I'd prefer to avoid them; I don't want to have to decode yet another layer of protocol encoding before I even get started.
I run only Linux. However, if there really is no other option than a Windows-based dumper/snooper, I can try getting the USB 1.1 pass-through working on my WinXP VirtualBox VM.
I run bare ALSA for my audio system with dmix for waveform audio mixing.
If a sound server is necessary, I'm willing to experiment with JACK.
No PulseAudio please. It took long enough to excise it from my system.
If the process involves ALSA MIDI routing:
a virtual pass-through device I can select from inside the Downloader is preferred, because the Downloader's port often only appears in an ALSA patch-bay GUI like patchage an instant before it starts communicating with the keyboard.
Neither KMIDIMon nor GMIDIMonitor supports snooping bi-directionally, as far as I can tell.
virmidi isn't relevant, and I haven't managed to get snd-seq-dummy working.
I suppose I could patch ALSA to get dumps if I really must, but that's an option of last resort.
The vast majority of my programming experience is in Python, PHP, Javascript, and shell script.
I have almost no experience programming in C.
I've never even seen a glimpse of kernel-mode code.
I'd prefer to keep my system stable and my uptime high.
This question has been unanswered for some time, and while I do not have an exact answer to your problem, I may have something that can push you (or others with similar problems) in the right direction.
I had a similar, albeit less complex, problem when I wanted to sniff the data used to set and read presets on an Akai LPK25 MIDI keyboard. Similar to your setup, the software for configuring the keyboard runs in Wine, but I also had no luck finding a sniffer setup for Linux.
For lack of an existing solution I rolled my own using ALSA MIDI routing over virmidi ports. I understand why you see them as useless: without additional software they cannot help with sniffing MIDI traffic.
My solution was to program a MIDI relay/bridge in Java, where I read input from a virmidi port, display the data, and forward it to the keyboard. The answer from the keyboard (if any) is also read, displayed, and finally transmitted back to the virmidi port. The application in Wine can be set up to use the virmidi port for communication, and in theory the process is completely transparent (except for potential latency issues). The application is written in a generic way and is not hardcoded to my problem.
I was only dealing with SysEx messages of about 20 bytes in length, so I am not sure how well the software works for sniffing transfers of large amounts of data. But maybe you can modify it / write your own program following the example.
Sources available here: https://github.com/hiben/MIDISpy
(Java 1.6, ant build file included, source is under BSD license)
I like using aseqdump for that.
http://www.linuxcommand.org/man_pages/aseqdump1.html
You could use virtual MIDI devices for this purpose. You have to load snd_seq_dummy so that it creates at least two ports:
$ sudo modprobe -r snd_seq_dummy
$ sudo modprobe snd_seq_dummy ports=1 duplex=1
Then you should have a device named Midi Through:
$ aconnect -i -o -l
client 0: 'System' [type=kernel]
0 'Timer '
1 'Announce '
client 14: 'Midi Through' [type=kernel]
0 'Midi Through Port-0:A'
1 'Midi Through Port-0:B'
client 131: 'VMPK Input' [type=user,pid=50369]
0 'in '
client 132: 'VMPK Output' [type=user,pid=50369]
0 'out '
I will take the port and device numbers from this example. You have to check them yourself according to your setup.
Now you connect your favourite MIDI device to the Midi Through ports:
$ aconnect 132:0 14:0
$ aconnect 14:0 131:0
At this point you have a connection where you can spy on both devices at the same time. You can use aseqdump to watch the MIDI conversation. There are different possibilities; I suggest spying on the connection between the loopback device and the real device, since this still allows rawmidi connections to the loopback device.
$ aseqdump -p 14:0,132:0 | tee dump.log
Now everything is set up for use. You just have to be careful about the port names in your MIDI application: it should read MIDI data from Midi Through Port-0:B and write data to Midi Through Port-0:B.
An additional hint: you can use the graphical frontend patchage to connect and inspect the MIDI connections via drag and drop. If you do, you will see that every Midi Through port appears twice, once as input and once as output. Both have to be connected for this setup to work.
If you want to use GMidiMonitor or some other application, you can spy on both streams intermixed (without direction information) using aconnect. Suppose 129:0 is the MIDI monitor port:
$ aconnect 14:0 129:0
$ aconnect 132:0 129:0
If you want exact direction information, you can add another GMidiMonitor instance connected to only one of the ports; the missing messages come from the other port.
What about using gmidimonitor? See http://home.gna.org/gmidimonitor/