I'm planning to build a simple audio interface. For that, I just want to know which format ASIO drivers usually deliver data to a program in. I couldn't figure that out from the specification or find it anywhere else. I don't want to write my own driver; I just want to deliver my data in the same format.
I've been doing some ASIO development, and from testing on 7 systems with 10 different soundcards (internal and external), most of them use ASIOSTInt32LSB and some use ASIOSTInt16LSB for output. With these two formats implemented, I've yet to see a soundcard that uses anything else.
Of course this is just plain old trial and error and not an exact approach by any means.
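For what it's worth, here's a minimal C sketch of how samples can be packed for those two types. The [-1.0, 1.0] float input range and the hard clipping are my own conventions (and "LSB" here means little-endian, i.e. native byte order on x86), so treat this as an illustration, not anything mandated by the ASIO SDK:

    #include <stdint.h>

    /* Pack floats in [-1.0, 1.0] as ASIOSTInt32LSB: one 32-bit signed
       integer per sample. The clipping policy is my own choice. */
    void pack_int32_lsb(const float *in, int32_t *out, int n)
    {
        for (int i = 0; i < n; i++) {
            float s = in[i];
            if (s > 1.0f) s = 1.0f;
            if (s < -1.0f) s = -1.0f;
            out[i] = (int32_t)(s * 2147483647.0);  /* double math avoids overflow */
        }
    }

    /* Pack the same input as ASIOSTInt16LSB: one 16-bit signed integer
       per sample. */
    void pack_int16_lsb(const float *in, int16_t *out, int n)
    {
        for (int i = 0; i < n; i++) {
            float s = in[i];
            if (s > 1.0f) s = 1.0f;
            if (s < -1.0f) s = -1.0f;
            out[i] = (int16_t)(s * 32767.0f);
        }
    }

In the real callback you'd check each channel's sample type (e.g. via ASIOGetChannelInfo) and dispatch to the matching packer.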
The Microsoft documentation for the WAVEOUTCAPS struct lists a number of WAVE_FORMAT_* flags for formats that an audio device can support.
I do not see any 24-bit variants listed there, although I have confirmed my sound card is capable of opening a 24-bit output with a call to waveOutOpen (and playing 24-bit audio files through that output).
I'm guessing that Microsoft defined additional constants somewhere for 18/20/24/32/48/64-bit output, but I can't find them. I tried searching the net and nothing came up, and I tried using Visual Studio to search for identifiers in my current namespace that start with "WAVE_FORMAT_", but did not find any additional formats that way.
Is it possible to check for 18/20/24/32/48/64-bit output availability on Windows using waveOutGetDevCaps(), or any similar function? If so, how?
waveOutXxx is a legacy API that, generally speaking, should not be used nowadays. It is an emulation layer on top of the real audio API and does not have to support 24-bit scenarios, which did not exist at the time waveOutXxx was designed. There are no new constants defined for the newer formats; there are so many of them that there cannot be a separate bit for each one.
You can compose a WAVEFORMATEX structure that describes your high-bit-depth format (you would typically use the WAVEFORMATEXTENSIBLE variant of it) and check it against the waveOutOpen function.
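A minimal sketch of that check in C, using the WAVE_FORMAT_QUERY flag so no device is actually opened. The 24-bit/44.1 kHz stereo parameters are just an example, and I spell out the PCM subformat GUID by hand to sidestep the ksmedia.h GUID plumbing (link with winmm.lib):

    #include <windows.h>
    #include <mmreg.h>
    #include <stdio.h>

    /* KSDATAFORMAT_SUBTYPE_PCM, written out explicitly. */
    static const GUID PcmSubformat =
        { 0x00000001, 0x0000, 0x0010, { 0x80,0x00,0x00,0xaa,0x00,0x38,0x9b,0x71 } };

    int main(void)
    {
        WAVEFORMATEXTENSIBLE wfx;
        ZeroMemory(&wfx, sizeof(wfx));
        wfx.Format.wFormatTag      = WAVE_FORMAT_EXTENSIBLE;
        wfx.Format.nChannels       = 2;
        wfx.Format.nSamplesPerSec  = 44100;
        wfx.Format.wBitsPerSample  = 24;
        wfx.Format.nBlockAlign     = wfx.Format.nChannels * wfx.Format.wBitsPerSample / 8;
        wfx.Format.nAvgBytesPerSec = wfx.Format.nSamplesPerSec * wfx.Format.nBlockAlign;
        wfx.Format.cbSize          = sizeof(WAVEFORMATEXTENSIBLE) - sizeof(WAVEFORMATEX);
        wfx.Samples.wValidBitsPerSample = 24;
        wfx.dwChannelMask          = 0x3;   /* front left | front right */
        wfx.SubFormat              = PcmSubformat;

        /* WAVE_FORMAT_QUERY: ask whether the format is supported
           without opening the device. */
        MMRESULT r = waveOutOpen(NULL, WAVE_MAPPER, (WAVEFORMATEX *)&wfx,
                                 0, 0, WAVE_FORMAT_QUERY);
        printf("24-bit/44.1 kHz stereo: %s\n",
               r == MMSYSERR_NOERROR ? "supported" : "not supported");
        return 0;
    }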
Or, better, use WASAPI and IAudioClient::Initialize; see Rendering a Stream for details and the way the WAVEFORMATEX structure is used there.
I need to do some NON-STANDARD signal processing operations with an RFID reader, so I'd like to know whether it is possible to extract the antenna's individual analog input signal samples (actually the digital samples right after the ADC) with a Motorola FX7500 (if you know how this works on the FX7400 or FX9500, please do tell; it could be helpful). The samples would be processed in a Java-based host computer program.
What I've already tried:
Investigating the possibilities of Motorola's own RFID3 API: it doesn't go deep enough to actually get at the input signal samples.
Using LLRP to its full extent: it doesn't allow analog signal sample access either. The RFSurvey functionality would have been helpful to some extent, but the FX7500 doesn't support it.
Accessing the RFID reader's Linux terminal, trying to find the driver function(s) that listen to the input sample stream. If the current input sample(s) could be extracted from the input stream, I could (in theory) write a script that saves a few of those sample values to a txt file on the host computer during a tag inventory round. My Linux skills are fairly limited, hence this question.
The only realistic route to a solution seems to be via the Linux terminal, so if you folks have any ideas about that (where to look and what to do), please advise!
Contents of the reader's root directory:
rfidadm#FX7500abcdef:/$ ls -1
apps
bin
dev
etc
home
include
lib
linuxrc
media
mnt
platform
proc
readerconfig
run
sbin
sys
tmp
usr
var
I cannot completely rule it out, but it's highly unlikely you can get at the raw digitized signal; the devices you're looking at typically aren't software-defined radio devices.
"speaking" RFID physically is a bit different from "usual" wireless communication: The reader doesn't only observe the energy transmitted from the tag, but more importantly the fluctuations of energy extracted from the near field of the reader's antenna coil. Hence, you don't actually have a baseband of RF bandpass signal, but hardware-specific modulations of transmitted (and inversely, antenna-reflected) energy. Demodulation is hence usually done in specialized hardware.
However, do not fret: it's totally possible to build a software-defined RFID reader. There have been several approaches to that, but personally, I trust the ones based on Ettus USRPs and/or GNU Radio most. Look through the results IEEE Xplore gives you, e.g. this search.
Most probably this is not possible with the Motorola readers. What you can do is use one of the RFID chipsets available on the market: either the AMS RFID ICs or the Impinj RFID ICs. As far as I know, both support retrieving the digital samples that are received, and both vendors offer development kits for test-driving the ICs.
I would like to create a utility in either PHP or Perl to convert an audio file created by Nortel's CallPilot voice mail system into a wave file. The problem is that the format, which has the .vbk file extension, is unknown to virtually any audio player; to date, I have not found one that will play a .vbk file. I've looked at audio file conversion libraries on CPAN and tried many of them, but they don't recognize the file. I was not successful with PHP's audio format manipulation either. Nortel does provide a converter, but it does not suit my needs: I would like to have this run via cron on a CentOS system. I don't know how to reverse engineer this format, and there seem to be just scraps of info on it on the web. This page indicates that it is "based on the H.232 format":
https://www.odesk.com/o/jobs/job/Reverse-Engineer-Nortel-VBK-Audio-Format_~~f501f11679f3f6bb/
I know this is a very old thread, but I've recently been looking into converting Nortel's .vbk format as well. I got usable audio by importing the .vbk files into Audacity with the raw data option: Encoding: U-Law, Byte order: little-endian, Channels: 1 (mono), Sample rate: 8000 Hz. Not sure if they have multiple formats for their .vbk files, but mine were from a BCM50 phone system.
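If those import settings hold for your files too (headerless 8 kHz mono u-law), a small C program can do the same conversion offline, which would fit the cron requirement. This is a sketch under that assumption, using the standard G.711 u-law decode and writing a canonical 16-bit PCM WAV header (build with gcc -o vbk2wav vbk2wav.c):

    #include <stdio.h>
    #include <stdint.h>

    /* Classic G.711 u-law to linear 16-bit decode. */
    static int16_t ulaw_decode(uint8_t u)
    {
        u = ~u;
        int t = (((int)(u & 0x0F)) << 3) + 0x84;
        t <<= (u & 0x70) >> 4;
        return (u & 0x80) ? (int16_t)(0x84 - t) : (int16_t)(t - 0x84);
    }

    static void put_le32(FILE *f, uint32_t v)
    {
        for (int i = 0; i < 4; i++) fputc((v >> (8 * i)) & 0xFF, f);
    }

    static void put_le16(FILE *f, uint16_t v)
    {
        fputc(v & 0xFF, f); fputc((v >> 8) & 0xFF, f);
    }

    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s in.vbk out.wav\n", argv[0]);
            return 1;
        }
        FILE *in = fopen(argv[1], "rb"), *out = fopen(argv[2], "wb");
        if (!in || !out) { perror("fopen"); return 1; }

        fseek(in, 0, SEEK_END);
        long n = ftell(in);            /* u-law bytes = sample count */
        fseek(in, 0, SEEK_SET);

        /* Canonical 44-byte WAV header: PCM, mono, 8000 Hz, 16-bit. */
        fwrite("RIFF", 1, 4, out); put_le32(out, 36 + (uint32_t)n * 2);
        fwrite("WAVE", 1, 4, out);
        fwrite("fmt ", 1, 4, out); put_le32(out, 16);
        put_le16(out, 1);              /* format: PCM */
        put_le16(out, 1);              /* channels: mono */
        put_le32(out, 8000);           /* sample rate */
        put_le32(out, 8000 * 2);       /* byte rate */
        put_le16(out, 2);              /* block align */
        put_le16(out, 16);             /* bits per sample */
        fwrite("data", 1, 4, out); put_le32(out, (uint32_t)n * 2);

        int c;
        while ((c = fgetc(in)) != EOF)
            put_le16(out, (uint16_t)ulaw_decode((uint8_t)c));

        fclose(in); fclose(out);
        return 0;
    }

If your .vbk files came from a different CallPilot variant, the raw-import experiment in Audacity is the quickest way to check whether this assumption still holds before scripting it.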
Well, this is the joy of closed proprietary systems. But there is a chance they could play nice. Try to contact Callpilot and see if they'll give you the format specs. It's worth a shot.
As for reverse engineering, you need to be able to generate known content: a constant tone at 60 Hz for exactly 1 second, then at 50 Hz, then for 10 seconds. Compare them. Isolate the data from the metadata. There is going to be compression involved, so try a handful of common compression schemes; research into Nortel's practices will probably tell you more. If you can feed that into a player and get a tone back out, you're on your way.
There's probably more informed and structured ways to go about reverse engineering, but from my experience it's a lot of trial and error.
On a basic embedded system's speaker with a single line of output, wiggling the output between 0 and 1 in a loop for given periods produces sound.
I'd like to do something similar on a modern Linux desktop. A brief look at PortAudio, OpenAL, and ALSA suggests to me that most people do things at a considerably higher level. That's OK, but not what I'm looking for.
(I've never worked with sounds on Linux before, so if a tutorial exists, I'd love to see it).
Actually, it... kinda is what you're looking for. While you can generate the waveform yourself, you still need to use an API to queue it and send it to the audio hardware; there no longer even exists a sane way to twiddle the audio line directly. Plus, you get cross-platform compatibility for free.
[...] embedded system's speaker with a single line of output, wiggling the output between 0 and 1 in a loop for given periods produces sound.
Sounds a lot like the old PC speaker. You might still find code for it in the Linux kernel.
I'd like to do something similar on a modern Linux desktop.
Then, AFAIK, you need an ALSA driver; the kernel's ALSA documentation ("Writing an ALSA Driver") covers how to write one. Use PWM to produce the sound.
Since there are many different sound cards and audio interfaces produced by different companies, there is no uniform way to get low-level access to them. With most sound I/O APIs, what you need to do is generate PCM data and send it to the driver; that's pretty much the lowest level you can go.
But PCM data is very similar to the 0-1 approach you describe; it's just that you have the in-between options too. 0-1 is 1-bit audio. 8-, 16-, and 24-bit audio is what you'll find on a modern sound card, and there are also 32- and 64-bit float formats. But they're all similar in spirit.
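To make that concrete, here is a minimal ALSA sketch that fills a PCM buffer with a square wave (the PCM equivalent of the 0/1 wiggling) and writes it to the default device. The 48 kHz rate, 440 Hz tone, and half-second latency are arbitrary choices (build with gcc -o beep beep.c -lasound):

    #include <alsa/asoundlib.h>

    int main(void)
    {
        snd_pcm_t *pcm;
        if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
            return 1;
        /* 16-bit little-endian, mono, 48 kHz, allow resampling,
           500 ms of buffering. */
        if (snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                               SND_PCM_ACCESS_RW_INTERLEAVED,
                               1, 48000, 1, 500000) < 0)
            return 1;

        static short buf[48000];          /* one second of samples */
        int period = 48000 / 440;         /* ~440 Hz square wave */
        for (int i = 0; i < 48000; i++)
            buf[i] = (i % period) < period / 2 ? 20000 : -20000;

        snd_pcm_writei(pcm, buf, 48000);  /* blocks until queued */
        snd_pcm_drain(pcm);               /* wait for playback to finish */
        snd_pcm_close(pcm);
        return 0;
    }

Replacing the square wave with any other sample-generating loop gives you the "wiggle the line yourself" experience, just expressed as PCM samples.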
I'm trying to anonymize packets from a pcap file that I have. I need to discard all the packet payloads (leaving only header information) and was wondering whether there is a tool I could use for this (on Linux)? I have thought about using tcpdump and specifying the snaplen, but with header lengths varying, I don't think that would work.
If there isn't a tool that can accomplish this, a pointer toward the best (easiest) library to code it with would work as well. I'd rather not take that route, since I have virtually no experience in network programming.
Any help is much appreciated.
You don't need any network programming experience to anonymize the packets. The format of the output file is well documented in the pcap-savefile(5) manpage. You will need to look up the layouts of the various protocols you'll be handling in order to identify which fields need to be anonymized. You should also look at the link-layer header types documentation at tcpdump.org to help you get started.
EDIT: Also look at libpcap itself... according to the pcap-savefile manpage:
NOTE: applications and libraries should, if possible, use libpcap to read savefiles, rather than having their own code to read savefiles. If, in the future, a new file format is supported by libpcap, applications and libraries using libpcap to read savefiles will be able to read the new format of savefiles, but applications and libraries using their own code to read savefiles will have to be changed to support the new file format.
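To illustrate, here is a sketch that uses libpcap to copy a savefile while cutting each packet's capture length down to the end of the transport header. It assumes an Ethernet link layer and plain IPv4 (no VLAN tags, no IPv6); frames it doesn't recognize are passed through untouched (build with gcc -o truncate truncate.c -lpcap):

    #include <pcap/pcap.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        char errbuf[PCAP_ERRBUF_SIZE];
        if (argc != 3) {
            fprintf(stderr, "usage: %s in.pcap out.pcap\n", argv[0]);
            return 1;
        }
        pcap_t *in = pcap_open_offline(argv[1], errbuf);
        if (!in) { fprintf(stderr, "%s\n", errbuf); return 1; }
        pcap_dumper_t *out = pcap_dump_open(in, argv[2]);
        if (!out) { fprintf(stderr, "%s\n", pcap_geterr(in)); return 1; }

        struct pcap_pkthdr *hdr;
        const u_char *pkt;
        while (pcap_next_ex(in, &hdr, &pkt) == 1) {
            struct pcap_pkthdr h = *hdr;
            /* Only touch Ethernet + IPv4 frames; pass anything else through. */
            if (h.caplen >= 34 && pkt[12] == 0x08 && pkt[13] == 0x00) {
                unsigned keep = 14 + (pkt[14] & 0x0F) * 4; /* Ethernet + IP hdr */
                unsigned proto = pkt[14 + 9];              /* IP protocol field */
                if (proto == 6 && h.caplen >= keep + 20)   /* TCP: keep header */
                    keep += ((pkt[keep + 12] >> 4) & 0x0F) * 4;
                else if (proto == 17 && h.caplen >= keep + 8) /* UDP: 8 bytes */
                    keep += 8;
                if (keep < h.caplen)
                    h.caplen = keep;   /* h.len still records the wire length */
            }
            pcap_dump((u_char *)out, &h, pkt);
        }
        pcap_dump_close(out);
        pcap_close(in);
        return 0;
    }

Note that this only drops the payload bytes; IP addresses and other header fields are left as-is, and scrubbing those is where the protocol layouts from the manpages come in.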