I am having trouble understanding how to send BCCMD commands to a CSR 8675 audio chip. For example, I would like to try out the BlueCore commands described in CSR's Audio API design document.
I have ADK 4.0.1 installed, as well as BlueSuite 2.6 and the standard CSR 8675 development board (H13179v2).
I have found CSR documents that describe the BCCMD command structure, but not a simple explanation of how to actually get started sending BCCMD commands.
I would like to know what I need to get started with basic commands such as chip reset and firmware version read, as well as other Audio API commands such as stream_get_source.
Related
I'm trying to pair from a Linux host (ARM-based, Angstrom distribution) to an MCU-driven embedded device using BLE Secure Connections with Just Works pairing. As the device I'm currently using an ESP32 dev kit flashed with the GATT security example. However, so far my attempts haven't been successful, and I have also failed to find the relevant documentation.
I managed to pair my Android smartphone with the device, so pairing on the device side seems to work in general. I also tried to perform the pairing without a Secure Connection (setting the authentication request mode to ESP_LE_AUTH_BOND), which worked with bluetoothctl or btmgmt.
I'm grateful for any pointers to documentation on how to perform pairing from the command line, from Python scripts, or from C/C++ code.
Have a look at the answer below and the included references; these cover pairing using BlueZ on Linux:
Raspberry Pi BLE Encryption/Pairing
If this still doesn't work, please launch btmon in another terminal before starting the pairing process, as that will give you an indication of what is going wrong.
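If Python suits you, below is a rough sketch of what "Just Works" pairing can look like through BlueZ's D-Bus API (using dbus-python and PyGObject). The adapter name hci0 and the device address are placeholders for your setup, the device must already be known to BlueZ (e.g. discovered via bluetoothctl's "scan on"), and depending on your configuration you may also need to register a NoInputNoOutput agent first.

import dbus
import dbus.mainloop.glib
from gi.repository import GLib

DEVICE_ADDRESS = "AA:BB:CC:DD:EE:FF"   # placeholder: your ESP32's address
ADAPTER_PATH = "/org/bluez/hci0"       # placeholder: first Bluetooth adapter

dbus.mainloop.glib.DBusGMainLoop(set_as_default=True)
bus = dbus.SystemBus()

# BlueZ exposes each known device as /org/bluez/hciX/dev_AA_BB_CC_DD_EE_FF
device_path = ADAPTER_PATH + "/dev_" + DEVICE_ADDRESS.replace(":", "_")
device = dbus.Interface(bus.get_object("org.bluez", device_path),
                        "org.bluez.Device1")

loop = GLib.MainLoop()

def on_paired():
    print("Paired")
    loop.quit()

def on_error(err):
    print("Pairing failed:", err)
    loop.quit()

# Pair() is asynchronous; run a main loop so bluetoothd can report the result.
device.Pair(reply_handler=on_paired, error_handler=on_error, timeout=60)
loop.run()

Watching btmon alongside this will show you the SMP exchange and where it fails, if it does.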
I hope this helps.
I want to implement BLE on a Raspberry Pi so that it sends the readings of a sensor in addition to its characteristics, and have another Raspberry Pi obtain that data.
Because the libraries that offer the possibility to read data from the sensor are written in C, C++ and Python, I have been searching through multiple libraries such as pygattlib, pygatt, pybluez and bluepy, without finding out how to send data in addition to the characteristics.
Is there any way to achieve this?
I have also read about the iBeacon and Eddystone protocols from Apple and Google; however, my first goal is to communicate between two Raspberry Pis (server and client).
If you are using a Raspberry Pi, you should have BlueZ preinstalled. BlueZ provides an API through D-Bus which you can use to add GAP and GATT functionality. The documentation is in the BlueZ source tree.
BLE advertising (GAP profile) documentation: https://git.kernel.org/pub/scm/bluetooth/bluez.git/tree/doc/advertising-api.txt
BLE data transfer (GATT profile) documentation: https://git.kernel.org/pub/scm/bluetooth/bluez.git/tree/doc/gatt-api.txt
Of course, it is easier with examples, and they are in the BlueZ repo too! They are written in Python, but it should be easy to translate them to a different language because they only use D-Bus.
https://git.kernel.org/pub/scm/bluetooth/bluez.git/tree/test/example-gatt-server
https://git.kernel.org/pub/scm/bluetooth/bluez.git/tree/test/example-gatt-client
https://git.kernel.org/pub/scm/bluetooth/bluez.git/tree/test/example-advertisement
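To give a flavour of the client side, here is a rough sketch that connects to the sensor Pi and reads one GATT characteristic through that same D-Bus API. The device address and the characteristic UUID are placeholders for whatever your server (e.g. one patterned on example-gatt-server above) exposes, and service discovery must have completed before the characteristic object appears on the bus.

import dbus

SERVER_ADDRESS = "AA:BB:CC:DD:EE:FF"                        # placeholder
SENSOR_CHAR_UUID = "12345678-1234-5678-1234-56789abcdef1"   # placeholder

bus = dbus.SystemBus()
manager = dbus.Interface(bus.get_object("org.bluez", "/"),
                         "org.freedesktop.DBus.ObjectManager")

# Connect to the remote device (it must already be discovered/known to BlueZ).
device_path = "/org/bluez/hci0/dev_" + SERVER_ADDRESS.replace(":", "_")
device = dbus.Interface(bus.get_object("org.bluez", device_path),
                        "org.bluez.Device1")
device.Connect()

# Walk the exported object tree to find the characteristic with our UUID.
# (In practice, wait until the device's ServicesResolved property is true.)
char_path = None
for path, interfaces in manager.GetManagedObjects().items():
    char = interfaces.get("org.bluez.GattCharacteristic1")
    if char and path.startswith(device_path) and char["UUID"].lower() == SENSOR_CHAR_UUID:
        char_path = path
        break

if char_path is None:
    raise RuntimeError("characteristic not found - is the GATT server running?")

characteristic = dbus.Interface(bus.get_object("org.bluez", char_path),
                                "org.bluez.GattCharacteristic1")
value = characteristic.ReadValue({})   # returns an array of bytes
print("sensor value:", bytes(bytearray(value)))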
I'll mark this as the answer because I was able to make it work with the JavaScript libraries noble and bleno.
I am trying to build a device that will encode H.264 video on a Raspberry Pi and stream it out to a separate web server in the cloud. The main issue I am having is that most implementations I find either have the web server directly on the Pi or have the embedded player playing video directly from the device.
I would like it to be pretty much plug and play no matter what network I am on, i.e. no port forwarding of any sort; all I need to do is connect the device to the network and the stream will be visible on a web page.
One possible solution is to simply encode frames as base64 JPEGs and send them to an endpoint on the web server; however, this is a huge waste of bandwidth and won't allow the frame rate that H.264 would.
Any idea on some possible technologies that could be used to do this?
I feel like it can be done with some websockets or zmq and ffmpeg somehow but I am not sure.
It would be helpful if you could provide more detail about the architecture of the device. Since it is a Raspberry Pi, it is probably also being used for video acquisition via the camera expansion port. If this is the case, you can access the video device and do quite a bit with respect to streaming using a combination of the available command-line tools.
Something like the following will produce an RTMP stream from the video camera host.
raspivid [preferred options] -o - | ffmpeg -i - [preferred options] rtmp://[IP ADDR]/[location]
From there, FFmpeg will do a lot of heavy lifting for you.
This will now enable remote hosts to access the RTMP stream.
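If you would rather drive that pipeline from a script on the Pi, here is a rough Python wrapper around the same idea. The raspivid options and the RTMP URL are placeholders rather than recommendations; substitute the settings and ingest address of your own server.

import subprocess

RTMP_URL = "rtmp://example.com/live/stream"   # placeholder: your cloud ingest URL

# raspivid writes raw H.264 to stdout...
raspivid = subprocess.Popen(
    ["raspivid", "-t", "0", "-w", "1280", "-h", "720", "-fps", "25",
     "-b", "2000000", "-o", "-"],
    stdout=subprocess.PIPE)

# ...and ffmpeg remuxes it (no re-encode) into FLV for the RTMP server.
ffmpeg = subprocess.Popen(
    ["ffmpeg", "-re", "-i", "-", "-c:v", "copy", "-f", "flv", RTMP_URL],
    stdin=raspivid.stdout)

raspivid.stdout.close()   # let ffmpeg own the pipe end
ffmpeg.wait()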
Other tools that could complement this architecture include ffserver, which could take the RTMP stream from the Pi and make it available to a variety of clients, such as a player in a web page. A quick look suggests ffserver may now be obsolete, but there are analogous components.
I read an interesting article about coding for the AR.Drone 2.0 from Parrot. In this code they use Node.js to talk to the drone, and the code starts out by creating a stream to /dev/ttyO0.
I am starting out to learn more about the background of Linux functionality and would like to know:
How do you initially find out that /dev/ttyO0 is being used, for example on the drone, which runs Linux? It is a kind of reverse engineering, I think, but what tools or commands are used for that?
When I want to reverse engineer a system like the drone and find out which commands are being sent, is there something like a "sniffer" to capture those commands?
I know this is not a short and easy question, but I would be happy to learn more about this, or to find out where to learn about it. Initially, though, the question of finding the right device would be the most interesting.
Thank you
I don't know the answer to the first part of your question, but I can address the second part.
Yes, the AR.Drone uses TCP and UDP for all communications between the drone and the controller app, including commands, telemetry and video. You can use a standard network sniffer, like tcpdump or Wireshark. When you connect to the drone, its default IP address is 192.168.1.1. Configure the sniffer to capture all traffic to and from that address. Here are some highlights of what you can see:
Command/"AT" comms, UDP on port 5556: This port is used to send commands to the drone. Commands are in ASCII, and look like AT*..., for example AT*REF=7,256 or AT*PCMD=7,1,-1110651699,0,0,1050253722. Section 6 of the AR.Drone Developer Guide describes most (but not all) of the commands.
Navdata, UDP on port 5554: This is binary encoded data sent from the drone containing sensor data and information about the state of the drone. It includes things like air pressure, altitude estimate, position estimate, flying mode, and GPS (if your drone is equipped with one). Since you mentioned Javascript, the file parseNavdata.js in the node-ar-drone library contains code to parse navdata.
Video, TCP on port 5555: This is realtime video from the drone in an almost-but-not-quite H264 format known as PaVE. The format is documented in section 7.3 of the Developer Guide, and most libraries for talking to AR.Drones can parse the format.
Another thing you may notice:
FTP: The official controller app uses standard FTP to send an ephemeris file to the drone that contains info that helps GPS get a faster lock.
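As a tiny illustration of that command channel, the sketch below pushes a couple of AT commands to the drone over UDP port 5556 from Python. The particular commands (a flat-trim calibration and the AT*REF example from a capture) are illustrative only; check section 6 of the Developer Guide for the exact semantics before flying anything.

import socket

DRONE_IP = "192.168.1.1"   # the drone's default address, as noted above
AT_PORT = 5556

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
seq = 1   # every AT command carries an increasing sequence number

def send_at(command, *args):
    """Format and send one ASCII AT command, e.g. AT*REF=<seq>,<args...>."""
    global seq
    tail = "," + ",".join(str(a) for a in args) if args else ""
    packet = "AT*%s=%d%s\r" % (command, seq, tail)
    sock.sendto(packet.encode("ascii"), (DRONE_IP, AT_PORT))
    seq += 1

send_at("FTRIM")       # flat-trim / sensor calibration, takes no extra arguments
send_at("REF", 256)    # mirrors the AT*REF=7,256 example seen in a capture

Running tcpdump or Wireshark while the official app flies the drone will show you the same kind of traffic, which is the easiest way to learn the protocol.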
Suppose that I want to write an audio filter in C++ that is applied to all audio, or to a specific microphone/source. Where should I start with this on Ubuntu?
Edit: to be clear, I don't understand how to do this, or what the roles of PulseAudio, ALSA and GStreamer are.
ALSA provides an API for accessing and controlling audio and MIDI hardware. One part of ALSA is a set of kernel-mode device drivers, whilst the other is a user-space library that applications link against. ALSA is single-client.
PulseAudio is a framework that facilitates multiple client applications sharing a single audio interface (ALSA itself being single-client). It provides a daemon process which 'owns' the audio interface and provides an IPC transport for audio between the daemon and the applications using it. This is used heavily in open-source desktop environments. Use of Pulse is largely transparent to applications: they continue to access the audio input and output using the ALSA API, with PulseAudio handling the audio transport and mixing. There is also JACK, which is targeted more towards 'professional' audio applications; perhaps a bit of a misnomer, although what is meant here is low-latency music production tools.
GStreamer is a general-purpose multimedia framework based on the signal-graph pattern, in which components have a number of input and output pins and provide a transformation function. A graph of these components is built to implement operations such as media decoding, with special nodes for audio and video input or output. It is similar in concept to CoreAudio and DirectShow. VLC and Libav are both open-source alternatives that operate along similar lines. Your choice between these is a matter of API style and implementation language: GStreamer, in particular, is an OO API implemented in C, while VLC is C++.
The obvious way of approaching the problem you describe is to implement a GStreamer/Libav/VLC component. If you want to process the audio and then route it to another application, this can be achieved by looping it back through PulseAudio or JACK.
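To make the GStreamer route concrete, here is a rough sketch of that kind of graph, using the Python bindings for brevity (the C/C++ API mirrors it element for element). The element names and the cutoff value are assumptions about a stock install; check gst-inspect-1.0 for what your system provides, and your own C++ filter element would take the place of audiocheblimit.

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

# pulsesrc grabs the default PulseAudio source (a microphone, or a monitor of
# an output if you want to filter everything); autoaudiosink plays it back out.
pipeline = Gst.parse_launch(
    "pulsesrc ! audioconvert ! "
    "audiocheblimit mode=low-pass cutoff=4000 ! "   # stand-in filter element
    "audioconvert ! autoaudiosink")

pipeline.set_state(Gst.State.PLAYING)
loop = GLib.MainLoop()
try:
    loop.run()
except KeyboardInterrupt:
    pass
finally:
    pipeline.set_state(Gst.State.NULL)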
ALSA provides a plug-in mechanism, but I suspect that implementing one from the ALSA documentation alone will be tough going.
The de facto architecture for building effects plug-ins of the type you describe is Steinberg's VST. There are plenty of open-source hosts and example plug-ins that can be used on Linux, and crucially, there is decent documentation. As with GStreamer/Libav/VLC, you will be able to route audio in and out of it.
Out of these, VST is probably the easiest to pick up.