Embedded System: Which platform or board? - audio

I want to build a portable system/device for processing guitar effects (my own POD).
I need:
touchscreen (as the main input)
SSD / SD card or other storage?
Intel Pentium / Core i3/i5/i7? (needs enough power for processing real-time effects)
network (Bluetooth, WLAN, Ethernet ...) (for control, configuration ...)
audio input/output (for the guitar, recording ...)
Linux
transportable (no big power supply; rechargeable batteries, plus USB and a normal power supply)
I would write the application running on the system in C++ with Qt.
I have seen many interesting boards like the Arduino, BeagleBoard, Hawkboard and some mini/micro ITX embedded systems, but now I don't know which one I should choose.
It should also be easy to develop for and not too expensive.

There is an Arduino project doing exactly that; take a look. If you need more power, then you will need something like an ARM board or a DSP microcontroller.

Related

Which microcontroller for fast high quality audio switching and playback

I'm building a device which will play high-quality sound samples and will switch between samples within 5 ms of a signal being applied.
I'm after a microcontroller which can allow this - I need 4 I/O pins for triggering the transitions between sounds, as well as the output pin(s) for the audio. The duration of the audio files will be 50 ms or so, but ideally I would have enough storage to allow the files to be 1 second or longer. It will loop the current file until told to change. I don't want audible pops or suchlike when switching files or running other commands - but there shouldn't be a need for anything complex to run beside it; it's purely audio playing and switching.
I've looked at various microcontrollers in the Arduino family but they don't seem optimal for this purpose (I tried, for example, the Mozzi library for Arduino, but it's not fantastic quality). Ideally I could do it all on the chip (whatever it is, it doesn't need to be an Arduino) without needing external storage or RAM modules, but if that's necessary I'll do it. The solution has to fit in a 2 cm wide cylinder (with no length constraints), so ideally it would stay within that - so no SD card modules or the like. Language-wise, I'm fairly new to them all, but I can learn whatever would be best.
Audio: 44.1 kHz CD-quality WAV, although I could obviously switch to a different format if necessary. If it is totally impossible to play such high-quality sound, then the sound quality could be lower.
Thank you for your help
For a simple application like this you would be best to just use a small ARM Cortex-M device hooked up to an external SPI flash chip. Most microcontrollers scale processing power and RAM with flash storage, so keeping it all on one chip will result in a grotesquely over-powered solution. Serial flash memory is very cheap and easy to use, and you can change the size in the future if you need to add more samples.
For the audio side, if you really want CD quality you'll have to look at getting an external audio DAC, as I don't know of any microcontrollers that integrate a CD-quality codec. External DACs aren't expensive or complex to use, but they add to the physical size and BOM cost. Many Cortex-M chips have built-in 12-bit DACs, though, so if the audio has a reasonably small dynamic range you might find this is suitable for your needs.
In terms of minimising pops and clicks, the Cortex devices will have enough power for some basic filtering to deal with this; a short crossfade when switching samples is usually enough (see the sketch below). I would recommend against Arduino, though, as you will quickly come up against processing power limitations, and I doubt you will want to dive into assembler optimisations.
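As a rough illustration of the kind of anti-pop handling meant here, the sketch below does a short linear crossfade between the outgoing and incoming samples; read_sample_old(), read_sample_new() and dac_write() are placeholders for whatever the flash-streaming and DAC routines end up being, not a specific vendor API:

    #include <stdint.h>

    /* Hypothetical helpers -- replace with the real SPI-flash reader and DAC
     * output routine for whichever Cortex-M part is chosen. */
    extern int16_t read_sample_old(void);   /* next sample of the file being left    */
    extern int16_t read_sample_new(void);   /* next sample of the file being entered */
    extern void    dac_write(int16_t s);    /* push one sample to the (external) DAC */

    #define FADE_SAMPLES 128   /* ~3 ms at 44.1 kHz -- well inside a 5 ms budget */

    /* Crossfade from the current sample buffer to the new one so the switch
     * produces no audible pop, then carry on looping the new file. */
    void switch_sample(void)
    {
        for (uint32_t i = 0; i < FADE_SAMPLES; i++) {
            int32_t a = read_sample_old();           /* fading out */
            int32_t b = read_sample_new();           /* fading in  */
            int32_t mixed = (a * (int32_t)(FADE_SAMPLES - i) + b * (int32_t)i) / FADE_SAMPLES;
            dac_write((int16_t)mixed);
        }
    }

In practice this would run from the same timer interrupt that paces normal playback at the sample rate.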

Getting ARM/WM8350 audio and power management working in linux

I have a rooted Sony prs900 running a Linux 2.6.23 #2 PREEMPT kernel for ARMv6 (a MontaVista Linux kernel). I'm having problems figuring out how power management works, both for running the system and for powering the audio port up and down.
I can neither figure out how to read the battery/powerline status information, nor get the audio chip to play sound, etc ... although I have been studying the kernel modules for a while...
It's worth a little money for help, say $100 paypal donation to an email account, (or more if this takes a long time...) for the first person able to explain to me how to do them in a way that works.
Eg: read battery status, and change some power modes like getting the audio amplifiers to power up/down so that the audio played to /dev/dsp (oss emulation) actually comes out as sound rather than just being consumed by the chip and ignored...
The actual Sony kernel and binary packages of the cross-compiler tools are located on the main page. The actual kernel source code is also available.
What I have learned so far myself:
The Sony is using a Wolfson Microelectronics WM8350 audio codec and battery charger/power management chip for all of the system's power; e.g., it can power the SD memory cards up and down, send more power to the CPU, power up the audio amplifiers, etc. See the WM8350 datasheet.
Pretty much, the whole problem revolves around getting the WM8350 kernel drivers to work...
Although the company brags quite a bit about its support under Linux, they don't have any application notes or examples that are actually helpful that I can find, other than the datasheet. I suspect the kernel drivers I have are beta code, because they don't seem to be behaving well (several error messages about wm8350 registers not being readable appear in the kernel log at every boot, even when running only the Sony's native software...).
The kernel driver source code of most interest is in: linux-2.6.23_091126/drivers/mxc/pmic/{core,wm8350}
Note that the wm8350 is a competitor to the MC14783, but the Linux kernel drivers use the same {core} driver source code for both chips; the Sony ONLY has the wm8350 on it -- there is no MC14783 present.
The code that I most desperately want to understand how to make operate is found in the subdirectory {wm8350}, e.g. wm8350/wm8350pm/power_supply_sysfs.c.
I want the audio to fire up too, but I'm not quite sure where the pertinent audio amplifier code is yet...
Very clearly the wm8350pm code is designed to export a /sys directory interface; right now /sys is mounted and operational on the system; but I'm not very familiar with the semantics of these newer style interfaces... they aren't quite like the old APM power interfaces for Linux laptops...
First I checked the obvious:
If I do a "cat /sys/power/state" it returns the word "mem" and nothing else.
The file has permissions -rw-r--r--, so potentially it could be written -- but I don't know with what. The string "mem" does not exist anywhere in the source code for the wm8350pm drivers, so I don't even know if /sys/power/state is part of the source code.
Doing a find /sys -iname "wm8350" reveals a handful of directories with the patterns:
wm8350-rtc , wm8350-pmic , wm8350-bl , wm8350-power , wm8350-led
wm8350-hifi-dai , wm8350-codec
wm8350-imx32ads.0
So I do an ls -l on each directory and look for actual files rather than symbolic links or subdirectories, and what I find are the stock, useless writable files: bind, unbind, uevent,
and a very few read-only files: pmic_reg, dapm_widget, modalias, codec_reg, which aren't very helpful.
It's no surprise that:
Doing: cat /sys/devices/platform/wm8350-ebx5016-audi/modalias gives "wm8350-ebx5016-audio"
Doing: cat /sys/devices/platform/wm8350-imx32ads.0/modalias gives "wm8350-imx32ads"
and since audio is off... Doing: cat /sys/devices/platform/wm8350-ebx5016-audi/dapm_widget reveals the audio state:
Headphone Jack: Off
Line In Jack: On
Mic Bias: Off
Left DAC: Off
Right DAC: Off
... (all else off and omitted, except)...
EBX5016-hifi: PM State: D3hot
The last two files, I expect, should do wm8350 chip register dumps... and one did.
Doing: cat /sys/devices/wm8350-pmic/pmic-reg causes a long pause, then nothing is printed.
but:
Doing: cat /sys/devices/wm8350/platform/wm8350-ebx5016-audi/wm8350-codec/codec_reg does print a list of registers, up to 0xe8, which is just a little larger than the datasheet says the chip should be (0x00 to 0xe6).
I tried using a Python program to play WAV files (it works on my desktop computer), and I noticed that /dev/dsp does open, the mixers DO set volume levels, and nothing comes out. So -- the audio driver is not able to enable the sound amplifiers on its own automatically.
There are no alsa sound files in /dev, nor are any alsa tools found on the embedded machine... so I assume Sony is strictly using OSS /dev/dsp and /dev/mixer.
There is only one other access point I can find to the wm8350:
There IS a device driver /dev/wm8350.
That driver is created by the source code in wm8350/wm8350_reg.c; in theory it should be able to read and write all registers using ioctl() calls from user space. However, something appears grossly wrong with it: I wrote a test program to read the wm8350 registers, and most of the registers return error messages rather than allowing themselves to be read, including the most public ID registers (0x00, 0x01), etc.
So, I'm quite stuck. Pointers, thoughts, hints, are quite desired.
I would like to change your question a little bit.
How does Linux ASOC (alsa system on chip) power management work?
I will answer this and then give some hints on using this specific chip.
.. If I do a cat /sys/power/state it returns the word "mem" and nothing else. The file has permissions -rw-r--r--, so potentially it could be written -- but I don't know with what. The string "mem" does not exist anywhere in the source code for the wm8350pm drivers, so I don't even know if /sys/power/state is part of the source code.
You need to get an understanding of the Linux driver model. Hardware in Linux is structured like a tree. The rationale is that things must be powered up/down in specific sequences. For instance, you should not power down the PCI bus controller before powering down the PCI peripherals. Linux builds a tree of hardware, and each driver (code) and device (data/actual hardware) has specific callbacks/function pointers which handle specific tasks (a minimal sketch follows the list below).
probe - Are you there? Determines whether the actual hardware/device is present.
remove - Shuts the device down. Module removal, power off, etc.
suspend - Going to sleep.
resume - Waking up.
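As a minimal sketch of those callbacks (this is not the wm8350 driver; "example-codec" and everything in it are made up purely for illustration), a 2.6-era platform driver looks roughly like this:

    /* Minimal sketch of the driver-model callbacks discussed above.
     * NOT the wm8350 driver; "example-codec" is a made-up device name. */
    #include <linux/module.h>
    #include <linux/platform_device.h>

    static int example_probe(struct platform_device *pdev)
    {
        /* "Are you there?" -- detect the hardware, map registers, etc. */
        return 0;
    }

    static int example_remove(struct platform_device *pdev)
    {
        /* Shut the device down cleanly (module removal, power off). */
        return 0;
    }

    static int example_suspend(struct platform_device *pdev, pm_message_t state)
    {
        /* Called on the way down to suspend-to-RAM ("mem"). */
        return 0;
    }

    static int example_resume(struct platform_device *pdev)
    {
        /* Called when the system wakes back up. */
        return 0;
    }

    static struct platform_driver example_driver = {
        .probe   = example_probe,
        .remove  = example_remove,
        .suspend = example_suspend,
        .resume  = example_resume,
        .driver  = { .name = "example-codec" },
    };

    static int __init example_init(void)
    {
        return platform_driver_register(&example_driver);
    }

    static void __exit example_exit(void)
    {
        platform_driver_unregister(&example_driver);
    }

    module_init(example_init);
    module_exit(example_exit);
    MODULE_LICENSE("GPL");

The matching device (registered by the board file) carries the same name so the core can pair them up; the suspend/resume callbacks are what a write to /sys/power/state eventually walks through.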
The third and fourth of these may look interesting to you. Now, to what /sys/power/state is about: the text mem means that suspend-to-memory is supported by your system. In this mode, Linux does these steps:
Find the first lowest-level active bus.
Suspend the devices on that bus.
Suspend the bus and de-activate it.
If a bus is still active, go to step 1.
Set the CPU to a low-power state (suspend to RAM).
This is not quite the full story. A few devices may support wake-up. They will have extra callbacks to enable waking the system from sleep modes. Read the documentation to find out about this.
That is general power management and driver/device structure. Now, how is the ASOC (alsa system on chip) structured?
There are typically three drivers/devices that get stitched together.
Codec - The wm8350 in your case. This includes audio amplifier drive circuitry and can include sound mixing and source controls. Supports digital to analog and analog to digital, typically through an i2s interface. The i2s is not the only interface. Usually a register bank is controlled through a secondary interface; i2c in the wm8350 case.
DAI - Refer to chapter 1.2.18.1 of the iMx31 reference manual; the hardware is called the SSI by Freescale. The next chapter on the AUDMUX is also useful to understand audio support on the iMx31/32.
Machine file - this is the board specific routing. It hooks the DAI to the codec and is the parent of both. It provides board clocking information and other specific configuration. For instance, it may use the AUDMUX to route the physical pins to the SSI block.
An i2c (or SPI) interface from the codec driver sends control commands to the codec chip. Some chips might use a wacky i2s interface or something else for control (but not in your case).
Now if you understood this, you will see that some features of the wm8350 seem to break the Linux model. The DAI interface can be stopped (digital audio), but the i2c interface must remain alive to program the registers related to the power functionality in the codec/PMIC (power management IC).
The latest WM8350 driver treats the IC as a multi-function device (MFD); that support was introduced in 2.6.35. The initial support may not have included all of the WM8350 features. Unfortunately, without some details on the layout of the Sony prs900 board, it would be difficult to know how to use the WM8350 PMIC functionality. The code will involve the iMx31 CPU, the WM8350, the i2c connection, and possibly some power supply circuitry.
For certain, you can just try echo mem > /sys/power/state and see what happens. If it works, you are lucky. The power/current consumption in sleep might not be optimal, but it will probably be hard to fix with the 2.6.23 kernel. You will want to look through the /sys directories for wake-up sources and possibly register these before issuing the suspend to memory command.
I can neither figure out how to read the battery/powerline status information, nor get the audio chip to play sound, etc ... although I have been studying the kernel modules for a while...
From the above discussions, the battery and powerline status will probably be found through another device. However, the pmic_reg file may actually give the status if things are connected properly on the board.
The audio chip will use ALSA. You need to use either alsamixer or the command-line amixer to set up audio routes through the codec, so that the DAI channel (PCM from the iMx32) is routed and sent to the speaker. To minimize power consumption, things are usually turned off by default. The /dev/dsp files are just OSS compatibility; the configuration underneath is ALSA native, and you are better off using ALSA if possible (a small sketch of doing this from C follows).
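If you end up doing this from C rather than the shell tools, a minimal alsa-lib sketch looks like the following; the "default" card string and the control name "Headphone" are assumptions, and the real names come from whatever the wm8350 codec driver registers (list them with amixer scontrols):

    #include <alsa/asoundlib.h>

    /* Sketch: un-mute and raise a playback control via alsa-lib.
     * The control name passed in is an assumption -- check "amixer scontrols". */
    int unmute_playback(const char *control_name)
    {
        snd_mixer_t *mixer;
        snd_mixer_selem_id_t *sid;
        snd_mixer_elem_t *elem;
        long vmin, vmax;

        if (snd_mixer_open(&mixer, 0) < 0)
            return -1;
        snd_mixer_attach(mixer, "default");          /* first sound card */
        snd_mixer_selem_register(mixer, NULL, NULL);
        snd_mixer_load(mixer);

        snd_mixer_selem_id_alloca(&sid);
        snd_mixer_selem_id_set_index(sid, 0);
        snd_mixer_selem_id_set_name(sid, control_name);

        elem = snd_mixer_find_selem(mixer, sid);
        if (elem) {
            snd_mixer_selem_set_playback_switch_all(elem, 1);    /* un-mute   */
            snd_mixer_selem_get_playback_volume_range(elem, &vmin, &vmax);
            snd_mixer_selem_set_playback_volume_all(elem, vmax); /* full vol. */
        }
        snd_mixer_close(mixer);
        return elem ? 0 : -1;
    }

    /* Usage: unmute_playback("Headphone"); */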
Donate to the OSF and get a tax receipt, if this was helpful enough.

Interfacing with a custom camera in Linux on the beaglebone

I need some pointers to help me get a camera (OV7670) working on a BeagleBone running Debian.
Using the camera capes as a guide, I have connected the camera to the GPMC pins on the BeagleBone and to the I2C pins. However, I am a bit stumped as to what I need to do in software to get Linux to recognize it as a camera and be able to read a frame from the GPMC.
From the reading I have done, it seems like I need to load a kernel module. I found that there is an OV7670 C driver file in the kernel source. What kind of modifications (if any) would I need to make?
I am also open to any suggested readings and tutorials that would help me.
The status of the C driver for AM335x devices: "/arch/arm/configs/am335x_evm_defconfig: # CONFIG_VIDEO_OV7670 is not set". It looks like you'll need to compile your own kernel with the OV7670 driver enabled, or...
As an alternative, you could write your own simple driver using one of the bone's two on-board programmable real-time units (PRUs). You'll need to become familiar with assembly, but that shouldn't take any more than 2-3 hours of dedicated reading, and you only need to do it once. The PRUs run on a 200 MHz clock, so every instruction takes 5 ns - more than enough time to generate a clock for the OV7670 and OV5642. (I've created a collection of examples on GitHub for the PRU: https://github.com/TekuConcept/PRU_Demo - I'm currently working on a driver for three OV5642 cameras on one bone for the AUVSI-RoboNation annual competition.)
Another alternative is the LogiBone, which is an FPGA cape for the bone. You may need to become familiar with Verilog for this one; nevertheless, the developers I talked with said they have an add-on available for OmniVision cameras and are working on implementing various OpenCV software features.
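Whichever route you take, it can be worth confirming the I2C (SCCB) wiring from user space before diving into kernel or PRU code. Below is a minimal sketch using the standard Linux i2c-dev interface; the bus (/dev/i2c-1), the 7-bit address 0x21 and the use of register 0x0A as the product-ID high byte are assumptions to check against your wiring and the datasheet:

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/i2c-dev.h>

    /* Assumed wiring: camera SCCB bus on /dev/i2c-1, OV7670 at 7-bit address
     * 0x21.  Register 0x0A should read back as 0x76 on an OV7670.  SCCB is not
     * quite I2C, so the register address is written and the value read back as
     * two separate transactions rather than one combined transfer. */
    int main(void)
    {
        int fd = open("/dev/i2c-1", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        if (ioctl(fd, I2C_SLAVE, 0x21) < 0) { perror("I2C_SLAVE"); return 1; }

        unsigned char reg = 0x0A, val = 0;
        if (write(fd, &reg, 1) != 1) { perror("write"); return 1; }
        if (read(fd, &val, 1) != 1)  { perror("read");  return 1; }

        printf("OV7670 PID register 0x0A = 0x%02x (expect 0x76)\n", val);
        close(fd);
        return 0;
    }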
As far as reading, there is nothing better than a well-documented datasheet!
http://www.voti.nl/docs/OV7670.pdf
http://www.phytec.com/wiki/images/7/72/AM335x_techincal_reference_manual.pdf
http://processors.wiki.ti.com/index.php/PRU_Assembly_Instructions
"Verilog for Digital Design" by Frank & Roman

Can a high frequency hardware input be accurately read under a windows or windows CE environment?

This question straddles the blurry line between software and hardware, so I hope the community feels it belongs on Stack Overflow; otherwise I would appreciate some advice as to where to move it.
My team is currently in the process of developing a robotic controller, and although most of our I/O requirements are pretty tame, we have one specific input that occasionally triggers at a 2 microsecond interval (either a digital rise or fall). We need to be able to accurately timestamp that input and record it. We do not need to act on it in any sort of immediate way. We feel as if we should be able to take advantage of a high-speed USB or PCI-E bus to facilitate this kind of high-speed input, but we lack expertise in this area and do not know what a solution to this problem would look like, or whether it can even be achieved without expensive/very involved proprietary solutions. In addition, there is a desire in our group to move away from Windows CE to a Windows Embedded (probably XP) x86-based platform; however, if this is something that could only be achieved on a CE system, that would be important to know.
In summary:
Is there a way to accurately read and timestamp a 2 microsecond (500 kHz) input under a Windows or Windows CE environment?
Bonus points for implementation ideas/visions/methods!
Thank you!
I wouldn't try to do it directly in Windows. What GPIO do you have on your Windows machine anyway? For around or under $20 there is a long list of solutions that are either serial or USB based (off-the-shelf microcontroller boards) which you could program so that their only job is to watch that I/O, timestamp it, and then casually send this data over USB or serial (or something else) to the host (a rough sketch of the firmware side is below).
I am thinking along the lines of the MSP430 LaunchPad, the Stellaris LaunchPad, the STM32F0 Discovery or STM32F4 Discovery, the family of Arduinos, mbed, the Leaf Labs Maple Mini, and so on.
To do it properly under Windows you would need to add some hardware anyway...
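To give a feel for how little the microcontroller has to do, here is a rough sketch in C; gpio_irq_enable(), timer_count() and uart_send() are placeholder names, not any particular vendor's API:

    #include <stdint.h>

    /* Placeholders -- replace with the chosen board's peripheral library. */
    extern void     gpio_irq_enable(void (*handler)(void)); /* edge interrupt on the input pin */
    extern uint32_t timer_count(void);                      /* free-running timer counter      */
    extern void     uart_send(const void *buf, int len);    /* serial/USB link to the host     */

    #define RING_SIZE 256
    static volatile uint32_t ring[RING_SIZE];
    static volatile unsigned head, tail;

    /* Interrupt handler: grab a timestamp the moment the edge arrives. */
    static void on_edge(void)
    {
        ring[head % RING_SIZE] = timer_count();
        head++;
    }

    int main(void)
    {
        gpio_irq_enable(on_edge);

        for (;;) {
            /* Casually drain the ring buffer to the host; timing here does not
             * matter because the timestamp was already captured in the IRQ. */
            while (tail != head) {
                uint32_t stamp = ring[tail % RING_SIZE];
                uart_send(&stamp, sizeof stamp);
                tail++;
            }
        }
    }

Many of these parts also have a hardware timer input-capture mode, which latches the counter on the edge in hardware and takes interrupt latency out of the timestamp entirely.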

Graphics development on ARM

I am planning to make a small OS and run a Tetris clone on it using an ARM Cortex-M3. Unfortunately, I am not able to buy any development boards as of now, so I will have to use simulators.
I have actually looked into QEMU which has LM3S6965EVB support, which contains an ARM Cortex-M3 processor. But apparently newer revisions of the board are not compatible with the model in QEMU as none of the examples I have downloaded from TI seem to work. Even the OLED display is different.
Another problem is graphics development, as the OLED display for the LM3S6965EVB has a really low resolution. I was able to get it up to 640x480 by editing the QEMU source, but as I could not get any examples to work, I don't know if that works either. Using the debug parameters for the SSD0323, all I can see is that it accepts some of the data that is sent to initialize the device, then hangs...
I have considered choosing another board in QEMU but that would mean redoing many things from scratch when I get my hands on a real device, as the other ones are too powerful for something as simple as this.
What should I do? Are there any other simulators out there that can help me accomplish what I am trying to do? I want to develop a small OS and some small games.
Thanks in advance. I have been searching for a solution for days and I am really stuck.
How much you have to redo depends, in part, on your software/system engineering: you can abstract where needed and only have to rewrite the abstraction layer, not the entire package. Actually, you can do much of your software design/testing on your host system and never cross-compile, only later cross-compiling for a simulator or real hardware.
For example, I assume you would construct the next video frame somewhere in RAM, then, depending on the hardware, either change some bits in a register and page flip, or copy from this frame buffer to the video/LCD in whatever form it wants. Using thumbulator you could build your screen somewhere in RAM (in the simulated memory space), then add code to the simulator so that when the simulation writes to such-and-such register, it takes those bytes from RAM and displays them on the host computer (running the simulation) - basically simulating some hardware. Use SDL or basic X calls or whatever you prefer. I normally take snapshots to .bmp files (very easy to write) and then look at them later.
Later, on hardware, your abstracted update_screen() function would have hardware-specific code to display that screen.
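As a concrete sketch of that abstraction (assuming a 640x480 24-bit frame buffer in RAM; the file name and sizes are just examples), the host-side version of update_screen() can simply dump the frame to a .bmp:

    /* Host-side update_screen(): dump the frame buffer to snapshot.bmp.
     * On real hardware the same function would instead poke the LCD. */
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    #define WIDTH  640
    #define HEIGHT 480

    static uint8_t framebuffer[HEIGHT][WIDTH][3];   /* R, G, B per pixel */

    static void put_le32(uint8_t *p, uint32_t v) { p[0]=v; p[1]=v>>8; p[2]=v>>16; p[3]=v>>24; }
    static void put_le16(uint8_t *p, uint16_t v) { p[0]=v; p[1]=v>>8; }

    void update_screen(void)
    {
        uint8_t header[54];
        uint32_t row_bytes = (WIDTH * 3 + 3) & ~3u;   /* BMP rows pad to 4 bytes */
        uint32_t image_size = row_bytes * HEIGHT;

        memset(header, 0, sizeof header);
        header[0] = 'B'; header[1] = 'M';
        put_le32(header + 2,  54 + image_size);  /* total file size       */
        put_le32(header + 10, 54);               /* pixel data offset     */
        put_le32(header + 14, 40);               /* BITMAPINFOHEADER size */
        put_le32(header + 18, WIDTH);
        put_le32(header + 22, HEIGHT);
        put_le16(header + 26, 1);                /* planes                */
        put_le16(header + 28, 24);               /* bits per pixel        */
        put_le32(header + 34, image_size);

        FILE *f = fopen("snapshot.bmp", "wb");
        if (!f) return;
        fwrite(header, 1, sizeof header, f);
        for (int y = HEIGHT - 1; y >= 0; y--) {              /* bottom-up rows */
            for (int x = 0; x < WIDTH; x++) {
                uint8_t bgr[3] = { framebuffer[y][x][2],     /* B */
                                   framebuffer[y][x][1],     /* G */
                                   framebuffer[y][x][0] };   /* R */
                fwrite(bgr, 1, 3, f);
            }
            static const uint8_t pad[3] = {0};
            fwrite(pad, 1, row_bytes - WIDTH * 3, f);
        }
        fclose(f);
    }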
Thumbulator only runs Thumb instructions, not ARM and not Thumb2, Thumb being the common denominator between the ARM processors (ARMv4T and newer, except for Cortex-M) and those that support the Thumb2 extensions (Cortex-M). Other than startup code, the compiling and programming experience is the same across the ARM family. The code (other than startup code and, of course, hardware-specific accesses) will run across the ARM family as well as on the simulator. If you go to a Cortex-M, then adding an architecture specification to the command line will change the build from Thumb-only to Thumb+Thumb2 instructions, giving you some performance boost. If you surf around my other projects on GitHub you will find this idea repeated over and over again; I have many simple Cortex-M examples where I use gcc and llvm and build the same C code with Thumb instructions and Thumb+Thumb2.
Another, completely different, answer is to get a GBA (Nintendo Game Boy Advance). You can get a GBA SP (it has a backlit display, which makes the whole experience better) for about $30 or so on eBay. You can buy flash cartridges that take SD cards for about the same amount. It has an ARM7TDMI; it runs Thumb code much faster than ARM code, giving you that Thumb experience in preparation for other/newer cores like the Cortex-M. For another $30 you can get a game link cable, chop it up, attach an RS-232 level shifter (I can talk you through all of this), and make a GBA serial cable. My preferred setup is to have a flash cartridge that I have pre-programmed with a serial bootloader; I download the program over serial into RAM and then run from RAM. This avoids having to yank the flash cartridge and/or SD card every time you re-compile the program. That is doable, and a cheaper solution, but it gets tiring fast.
If you have a Nintendo DS, for $12 to $15 you can get an SD-based flash cartridge that you can likewise use for development. I recommend learning the GBA first, which you can do on the NDS if you buy a GBA-side memory cartridge (you need a DS Lite, not an NDSi nor a 3DS) supported by the software on the cartridge. (The EZ Flash 3-in-1 GBA size, for example, is a good one; as well as being memory you can flash, that one can be flashed with the NDS and carried over to the GBA - this is how I put my serial bootloader on it.) These loaders will let you put your .gba file on the NDS cartridge's SD card, load it into the GBA cartridge, and it switches the NDS into GBA mode and runs as a GBA.
There are lots of other solutions; sparkfun.com likely has a number of ARM-based boards that can drive LCDs and/or come with LCDs. You can go to EarthLCD and get one of the serial-based LCD panels that make for rapid development; later, of course, a cheaper solution is desirable. Along the same lines, you could instead simulate an EarthLCD-like thing using your host computer: have the embedded microcontroller send screen updates over serial to the host, and let the host display the graphics. Later, replace that screen update with something else.
For this latter solution, for about $20 you can get an STM32F4 Discovery board; it has a Cortex-M4, runs up to 168 MHz, and has a number of serial ports, of which at least two have pins not being used by something else, so you could easily have one port for debug messages and the other for this virtual serial screen (a rough sketch is below). In the stm32f4 directory in my stm32vld repo on GitHub I have a number of getting-started examples for using that board (as well as the stm32vld, which is a few bucks cheaper but not as powerful as the stm32f4). Likewise, your host application can take keystrokes and turn them into user-control/game-control commands sent back to the game software on the microcontroller.
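A rough sketch of the target side of that virtual serial screen; uart_putc() is a placeholder for the board's real UART routine, and the 160x120 8-bit frame size is arbitrary:

    #include <stdint.h>

    /* Sketch of the "virtual serial screen": the target streams its frame
     * buffer out of a UART and a host program draws it in a window. */
    #define FRAME_W 160
    #define FRAME_H 120

    extern void uart_putc(uint8_t c);          /* board-specific UART write   */
    static uint8_t frame[FRAME_H][FRAME_W];    /* the game draws into this    */

    void update_screen(void)
    {
        /* Simple framing: two sync bytes, then the raw pixels.  The host side
         * waits for the sync, reads FRAME_W * FRAME_H bytes, and blits them. */
        uart_putc(0xA5);
        uart_putc(0x5A);
        for (int y = 0; y < FRAME_H; y++)
            for (int x = 0; x < FRAME_W; x++)
                uart_putc(frame[y][x]);
    }

This is the same update_screen() abstraction described earlier; only the body changes between the .bmp-snapshot, serial and real-LCD back ends.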
There is of course the BeagleBoard or Hawkboard, or the Raspberry Pi when it comes out, or the OpenRD (I don't like the plug computer but do like the OpenRD), which have video processing and video output direct to a monitor and/or TV using composite or whatever. About $150 to $200, and it just works - run with it. You definitely don't need to run Linux on these platforms; you can make your own OS or whatever you like and run that, very simply.
There are more solutions than you probably have time and/or money to pursue; you need to find one that fits within your comfort or happiness zone for how you like to do development, and try that path.
