Is there a Linux radio standard?

We're about to embark upon implementing a device running Linux that (among other things) will be attached to a software defined FM/AM radio that can also receive RDS data describing playlists and other such stuff. It's a relatively stupid device that mostly contains a DSP or two that act as tuners and otherwise does very little processing of the signal.
I was thinking we would write kernel drivers for the device and then a userland hardware abstraction layer that provides a standardized interface, abstracts away the details of exactly when the RDS data was received, and deals with error handling and all the other messy stuff. Is there already a userland layer like this? It would be nice to either avoid making it at all, or to make our stuff plug-compatible with something that already exists so we could use other projects for the radio UI if we wanted.

Radio support in Linux
It sounds like you are creating a new hardware radio device? You'll probably need to build a driver for this device. Some help getting started can be found here, here, and here. If your device is not new, it may already have a driver in the Video4Linux2 project.
It looks like there are currently some RDS-related projects based on the saa6588 kernel module.
According to this page, these cards currently have an SAA6588 chipset:
Terratec Cinergy 600
KNC ONE TV-Station RDS
KNC One TV-Station DVR
TYPHOON TV TUNER CARD RDS
Sundtek MediaTV Pro (supported by the manufacturer)
Sundtek USB FM Radio (FM Transmitter/Receiver, supported by the manufacturer)
RDS-specific information
I'd recommend checking out some of the projects related to Video4Linux2 (v4l2); there is an RDS decoding library among them. It runs in userspace, so the RDS decoding work can be done there for you:
According to V4L2 specifications, raw data from RDS decoders is read
from the radio device. Data consists of blocks where each block is 3
bytes long. All decoding has to be done in user space.
RDS API
Here is a complete API reference for Video4Linux2. Here is an article series to get acquainted with it.
The particular section for the RDS API is here. This page explains how to check whether RDS data is available:
Whether an RDS signal is present can be detected by looking at the rxsubchans field of struct v4l2_tuner: the V4L2_TUNER_SUB_RDS flag will be set if RDS data was detected.
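To make the user-space decoding model concrete, here is a minimal sketch in Python (the device path /dev/radio0 and the lack of error handling are assumptions for illustration; a full HAL would also check V4L2_TUNER_SUB_RDS via the VIDIOC_G_TUNER ioctl before reading) that reads and unpacks the raw 3-byte RDS blocks described above:

    import os

    RADIO_DEV = "/dev/radio0"   # assumed V4L2 radio node; adjust for your hardware

    # Per the V4L2 RDS read() interface, each block is 3 bytes:
    # LSB, MSB, then a status byte whose low 3 bits hold the block ID
    # (0=A, 1=B, 2=C, 3=D) and whose top bits flag corrected/uncorrectable errors.
    BLOCK_MSK = 0x07
    BLOCK_ERROR = 0x80

    def rds_blocks(path=RADIO_DEV):
        fd = os.open(path, os.O_RDONLY)
        try:
            while True:
                chunk = os.read(fd, 3)            # one raw RDS block
                if len(chunk) < 3:
                    continue
                lsb, msb, status = chunk
                if status & BLOCK_ERROR:
                    continue                      # drop uncorrectable blocks
                yield status & BLOCK_MSK, (msb << 8) | lsb
        finally:
            os.close(fd)

    if __name__ == "__main__":
        for block_id, word in rds_blocks():
            print("block %d: %04x" % (block_id, word))

Assembling these 16-bit blocks into RDS groups (PI code, program service name, RadioText, and so on) is exactly the work the userspace decoding library mentioned above can take off your hands.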
SDR RDS decoder DSP in GNU Radio Companion
Although it's not an official API, I found one last small project that might be worth looking into:
RDS reception using SDR
Here are some more radio related projects worth looking into.

It may be worth looking into whether the GENIVI consortium (http://www.genivi.org/) has a standard application for this yet. They're developing standards of this sort specifically for automotive "infotainment" purposes, and this seems like it would fall within their area of standardization.
Unfortunately, they don't seem to publish their stuff publicly, so you may need to ask around or send them an email directly.

How about GNU Radio? It has hardware support for lots of software-defined radio components, and data flows can easily be connected via a GUI with its "GNU Radio Companion" (GRC).
It exposes Python and C++ APIs that your UI layer can use. There are a number of examples to be found online.
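As a taste of the Python API, here is a minimal, hedged flowgraph sketch. It plays a test tone through the sound card rather than tuning real hardware, since the source block depends on which SDR front end you pick (gr-osmosdr, UHD, etc.):

    from gnuradio import gr, analog, audio

    class ToneTest(gr.top_block):
        """Minimal GNU Radio flowgraph: a 440 Hz sine into the audio sink.
        A real receiver would replace the signal source with an SDR source
        block followed by an FM demodulator (e.g. analog.wfm_rcv)."""

        def __init__(self, samp_rate=48000):
            gr.top_block.__init__(self, "tone_test")
            src = analog.sig_source_f(samp_rate, analog.GR_SIN_WAVE, 440, 0.3)
            snk = audio.sink(samp_rate)
            self.connect(src, snk)

    if __name__ == "__main__":
        tb = ToneTest()
        tb.start()
        input("Press Enter to stop...")
        tb.stop()
        tb.wait()

GRC generates Python files of roughly this shape, so a flowgraph prototyped in the GUI can later be dropped behind your own UI layer.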

Related

What kind of property should I be using to allow the user to specify the output data type if my device supports two different kinds?

I am creating a REDHAWK device that outputs either 16-bit signed complex samples or VITA49 packets. I want the device to be told which output type it should provide when it is allocated.
How should I go about this?
Should I just add a simple property to the front_end_tuner_allocation struct?
Or is there some other recommended approach?
Is there an example that I might look at?
ANSWER
I was able to speak with an experienced REDHAWK developer.
Apparently, in situations where the device is capable of producing different data flows, separate Device node specifications should be created. This means that the output port to be used is dictated when the device is started and should not be configured at run time.
This actually makes sense given the life-cycle of the device and the component receiving the data.
This is how I rationalize it: the component won't be able to control the device until allocate-capacity is performed, so there would be no way for the component/waveform to set up the device ahead of time.

Interacting with BLE Cycle Trainer

My current project is in Flutter using Dart, working with Bluetooth Low Energy devices. I have the basics up using this library and am able to do the following:
1. Search for devices and list
2. Connect to device
3. Retrieve services and read characteristic values
4. Subscribe to changes on characteristics.
In order to interact with the device correctly, I need to read from and write to the correct services/characteristics to read data and set things like resistance on the flywheel.
I have used the link below and have started mapping out the services; however, although the reading can be worked out, the writing to characteristics is a bit out of my reach.
https://www.bluetooth.com/specifications/gatt/viewer?attributeXmlFile=org.bluetooth.service.cycling_power.xml
The question is:
Does anyone have experience working with BLE cycle trainers, and could you provide some insight into how to read and manage services and characteristics appropriately?
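For the reading side, the linked GATT spec pins down the byte layout. As an illustration (sketched in Python for brevity; the same little-endian unpacking applies in Dart), here is a hedged parser for the fixed leading fields of a Cycling Power Measurement notification, a uint16 flags word followed by a sint16 instantaneous power in watts; the optional fields that follow are gated by the flag bits and omitted here:

    import struct

    def parse_cycling_power_measurement(data: bytes) -> dict:
        """Decode the fixed leading fields of a Cycling Power Measurement
        (characteristic 0x2A63) notification: little-endian uint16 flags,
        then sint16 instantaneous power in watts. The optional fields that
        follow (pedal balance, crank/wheel revolutions, ...) depend on the
        flag bits and are skipped in this sketch."""
        flags, power_w = struct.unpack_from("<Hh", data, 0)
        return {"flags": flags, "instantaneous_power_w": power_w}

    # e.g. a payload of b"\x20\x00\xc8\x00" decodes to flags 0x0020 and 200 W
    print(parse_cycling_power_measurement(b"\x20\x00\xc8\x00"))

For the writing side, note that resistance control is usually not part of the Cycling Power Service at all: many trainers expose it through the Fitness Machine Service (FTMS) control point or a vendor-specific service, so enumerating every service the trainer advertises is a good first step.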

Yocto Project USB sensor access

I've never worked with the Yocto Project and barely know what it is, but I'm investigating the possibility of using a Simatic 2040 as a gateway between a USB hall sensor and an industrial PLC network.
The sensor that we want to use is this one. It's designed to be used with a Windows desktop PC, connected via USB.
Now my main question is: would it be possible to write software on the Yocto device to capture the sensor's data and share this information with an industrial PLC network?
The industrial PLC network is also Siemens-based, so I don't see many problems there, because we can make use of the Node-RED Profinet or Modbus libraries.
The question is stated in very general terms, so I will have to answer in very general terms.
Overall the answer to your question is yes, but there are a number of details to sort out (some of them might be show stoppers).
Yocto is a system to generate embedded Linux images and also SDKs (cross compiler toolchain + sysroot).
You might be fine taking an existing Yocto image for the SIMATIC 2040 and just adding your own application to it. For this, a matching SDK has to exist. This approach works fine as long as your application does not have too many dependencies and you don't need too many modifications to the existing image.
If this is not the case, you might be better off generating a custom image as well as an SDK (based on the existing SIMATIC 2040 configuration).
As for your USB device: the linked data sheet only mentions Windows support. Your options are:
Talk to the vendor. Do they provide a driver that simply isn't advertised? Are they willing to hand out a detailed datasheet?
Check whether there is a community driver in the mainline kernel.
Reverse engineer the existing Windows driver (see the probing sketch below).
Pick an alternative device with an existing Linux driver (preferably in the mainline kernel).
The right solution depends on the time and effort you are willing and able to put into this.
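If you do end up probing or reverse engineering the sensor, a quick user-space survey of what the device exposes is worth doing before committing to a kernel driver. Here is a hedged sketch using pyusb (which would have to be added to the image or SDK); the vendor and product IDs are placeholders to be replaced with the values lsusb reports for your sensor:

    import usb.core   # pyusb

    VENDOR_ID, PRODUCT_ID = 0x1234, 0x5678   # placeholders; take the real IDs from lsusb

    dev = usb.core.find(idVendor=VENDOR_ID, idProduct=PRODUCT_ID)
    if dev is None:
        raise SystemExit("sensor not found on the USB bus")

    print("product:", dev.product, "manufacturer:", dev.manufacturer)
    for cfg in dev:                    # walk configurations and their interfaces
        for intf in cfg:
            print("interface %d: class 0x%02x, kernel driver attached: %s" % (
                intf.bInterfaceNumber,
                intf.bInterfaceClass,
                dev.is_kernel_driver_active(intf.bInterfaceNumber)))

If an interface turns out to be a standard class (HID, CDC-ACM), a mainline driver may already cover it; if it is vendor-specific, you are back to the options listed above.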

WiFi Connection's Name

I am developing a Java ME application that uses a WiFi connection. My question is: how do I get a particular WiFi connection's name using Java ME code?
My requirement is for the Nokia E5 device only.
After doing much research, I found that it is not possible in Java ME to fetch the WiFi connection's name.
However, a similar library would be com.nokia.multisim.networkid, which returns the network ID and network short name.
I don't think it is 100% possible in J2ME, and even if it works on one device, there is no guarantee that it will work on all J2ME devices that have WiFi connectivity.
The most appropriate answer I have found is below; please go through it:
" Much as I hate to put you through all that grief and then not have a simple answer, I don't have a simple answer.
The reason for that is because Java's networking model is based on TCP/IP, and the TCP/IP architecture is based on the idea that applications will neither know nor care about the hardware details of networking. A typical mobile device may contain several different network interfaces (WiFi, Bluetooth, Infrared, USB cable, and so forth), but when an app wants to contact another network node, the app doesn't know which of these interfaces is actually being used. And in fact, if the OS wants to do so, it can use more than one (in parallel) and/or switch interfaces in and out, based on routing criteria such as best measured data rates. Rather like how cell phones route phone calls.
So basic Java/JME won't know anything about WiFi.
However, there is an extension, specified as JSR 309 (http://jsp.org) that looks like it may help. It supports learning about and controlling the network interfaces themselves. The problem is that not all devices will implement this extension, so it will depend on what device(s) you are supporting. "

Record audio from various internal devices in Android (via undocumented API)

I was wondering whether it is possible to capture audio data from other sources like the system output, FM radio, Bluetooth headset, etc. I'm particularly interested in capturing audio from the FM radio and have already investigated all possibilities, including trying to sniff the raw Bluetooth communication between the phone and the radio device, with no luck. It's too bad Android only allows recording audio from the MIC.
I've looked at the Android source code and couldn't find a backdoor to allow me to do that without rooting the device. Do you, at least, have any idea how to use other devices (maybe by accessing /dev/audio somehow), say via the NDK or, even better, Java (maybe reflection?), to trick the system into capturing the audio stream from, say, the FM radio? (In my case I'm trying to develop the app for the HTC Desire.)
PS. For those of you who are against using undocumented APIs, please don't post here; I'm writing an app for my personal use, and even if I ever publish it, I will warn the user of possible incompatibilities.
I've spent quite some time deciphering the audio stack, and I think you could try to hijack libaudio. You'll have trouble speaking directly to the hardware (/dev/*) because many devices use proprietary audio drivers; there's no rule in this regard.
However, the audio hardware abstraction layer (HAL) provided by /system/lib/libaudio.so should expose the API described at http://source.android.com/porting/audio.html
The Android system, and especially audioflinger, uses this libaudio HAL to find available devices, deal with routing, and of course to read/write PCM data.
So, you could hijack the interaction between audioflinger and libaudio by renaming the latter and providing your own libaudio that decorates the real one. Doing so, you should be able to log what happens and very possibly intercept the FM radio output, provided that it is not handled directly by the hardware.
Of course, all this requires rooting. Please comment if you manage to do this; it interests me.
