Raspberry Pi program that interfaces with I/O (HID, SPI, GPIO) - Linux

I am working on a project using a Raspberry Pi where I need to create a GUI that can interface with the various peripherals. Specifically, the program needs to:
-read the position data from a touchscreen, mouse, or other HID
-perform some math on the position data
-store the data in a FIFO buffer
-output that data over the SPI port at a fixed frame rate
I have an electronics background, with some experience writing firmware for microcontrollers, but I'm a relative novice when it comes to this kind of stuff. I have done some research, and it looks like the mouse/touchscreen data is available by reading a file in the /dev/input directory, and I assume you can access the SPI port by reading or writing to some file in the /dev directory. (I am currently running Raspbian.)
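For example, a minimal sketch of the input side using the evdev interface (the event node /dev/input/event0 is an assumption; check /proc/bus/input/devices for the right one):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <linux/input.h>

    int main(void)
    {
        /* Device node is an assumption; see /proc/bus/input/devices. */
        int fd = open("/dev/input/event0", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct input_event ev;
        while (read(fd, &ev, sizeof ev) == sizeof ev) {
            /* Touchscreens report absolute positions as EV_ABS events;
               mice report relative motion as EV_REL. */
            if (ev.type == EV_ABS && (ev.code == ABS_X || ev.code == ABS_Y))
                printf("abs %c = %d\n", ev.code == ABS_X ? 'x' : 'y', ev.value);
        }
        close(fd);
        return 0;
    }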
My initial thoughts were to write a simple C program that a) reads the touchscreen data file and stores it into a preallocated buffer in memory, and b) writes the data that is next in line to the SPI data file in order to send it out the SPI port. Part (a) would either fire off on a timer interrupt and poll the touchscreen data to see if it's new, or fire off from a "new data" interrupt. Part (b), the SPI part of the program, would fire off at a fixed rate, probably from a timer interrupt.
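As a rough sketch of part (b): user space can take a fixed-rate tick from timerfd rather than a real timer interrupt, and push one frame per tick through the spidev interface (the device node /dev/spidev0.0 and the rates below are assumptions):

    #include <fcntl.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/timerfd.h>
    #include <linux/spi/spidev.h>

    int main(void)
    {
        int spi = open("/dev/spidev0.0", O_RDWR);   /* assumed device node */
        uint32_t speed = 500000;                    /* 500 kHz, per the numbers below */
        ioctl(spi, SPI_IOC_WR_MAX_SPEED_HZ, &speed);

        /* A 1 ms periodic tick = 1k frames per second. */
        int tfd = timerfd_create(CLOCK_MONOTONIC, 0);
        struct itimerspec ts = { .it_interval = { 0, 1000000 },
                                 .it_value    = { 0, 1000000 } };
        timerfd_settime(tfd, 0, &ts, NULL);

        uint16_t frame[3] = { 0 };                  /* three 16-bit words per frame */
        for (;;) {
            uint64_t expirations;
            read(tfd, &expirations, sizeof expirations);  /* blocks until the tick */
            /* ...pull the next frame out of the FIFO here... */
            struct spi_ioc_transfer xfer = {
                .tx_buf        = (unsigned long)frame,
                .len           = sizeof frame,
                .bits_per_word = 16,  /* if the controller supports it; else pack bytes */
            };
            ioctl(spi, SPI_IOC_MESSAGE(1), &xfer);
        }
    }

This gives no hard real-time guarantee, but at a 1 ms period with ~100 us of SPI traffic per frame there is a lot of slack.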
So my questions are:
-Is it even possible (or easy) to access interrupts or system timers from user space? I assume you would need to get hooks into the kernel somehow?
-If so, how would I make it so that my function runs whenever an interrupt or timer fires?
-How easy is it to use the SPI DMA from user space? Does anyone have any experience doing that? I did a little research and it looks like you need to load a custom kernel driver, but I don't know how tricky that would be.
-Is it possible to write out a parallel word on the GPIOs the way you would on a microcontroller (i.e. all at once, by writing the word to the port's output register)?
I know there are plenty of higher-level programming languages and wrappers that allow you to talk to the peripherals, but I'm a little hesitant to do it that way since the timing seems pretty tight. I need to output (3) 16-bit words per frame at ~1k frames per second. At a 500kHz SPI bitrate, each frame is 96 us long, and at 1k frames per second, each period is 1ms. This is why SPI DMA, or even writing the data out as a parallel word, would be a lot easier in terms of timing.
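On the parallel-word point: the BCM2835 does expose microcontroller-style set/clear registers, so something along these lines is possible from user space via /dev/mem. This is a sketch only; the base address is for the original Pi, the GPSET0/GPCLR0 offsets are from the BCM2835 peripherals manual, a full word update takes one set store plus one clear store, and the pins must already be configured as outputs:

    #include <fcntl.h>
    #include <stdint.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define GPIO_BASE 0x20200000u   /* BCM2835 (original Pi); later models differ */

    int main(void)
    {
        int fd = open("/dev/mem", O_RDWR | O_SYNC);
        volatile uint32_t *gpio = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                       MAP_SHARED, fd, GPIO_BASE);

        /* GPSET0 is at byte offset 0x1C (word index 7), GPCLR0 at 0x28 (index 10). */
        uint32_t word = 0xABCD;         /* say, 16 data bits on GPIO 0..15 */
        gpio[7]  = word & 0xFFFF;       /* set the 1 bits in one store   */
        gpio[10] = ~word & 0xFFFF;      /* clear the 0 bits in another   */
        return 0;
    }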
Thanks for the help!

Related

Getting ARM/WM8350 audio and power management working in linux

I have a rooted Sony prs900, running a Linux 2.6.23 #2 PREEMPT kernel for ARMv6 (MontaVista Linux kernel). I'm having problems figuring out how power management works, both for running the system and for powering the audio port up and down.
I can neither figure out how to read the battery/powerline status information, nor get the audio chip to play sound, etc ... although I have been studying the kernel modules for a while...
It's worth a little money for help, say a $100 PayPal donation to an email account (or more if this takes a long time...), for the first person able to explain to me how to do these in a way that works.
E.g.: read battery status, and change some power modes, like getting the audio amplifiers to power up/down so that audio played to /dev/dsp (OSS emulation) actually comes out as sound rather than just being consumed by the chip and ignored...
The actual Sony kernel and binary packages of cross-compiler tools are located on the main page. Actual kernel source code is also available.
What I have learned so far myself :
The Sony is using a Wolfson Micro WM8350 audio codec and battery charger/power management chip for all the system's power; e.g. it can power the SD memory cards down/up, send more power to the CPU, power up audio amplifiers, etc. See: WM8350 Datasheet.
Pretty much, the whole problem revolves around getting the WM8350 kernel drivers to work...
Although the company brags quite a bit about its support under Linux, they don't have any application notes or examples that are actually helpful that I can find, other than the datasheet. I suspect the kernel drivers I have are beta code, because they don't seem to be behaving well (several error messages about wm8350 registers not being readable appear in the kernel log at every boot, even when running only the Sony's native software...).
The kernel driver source code of most interest is in: linux-2.6.23_091126/drivers/mxc/pmic/{core,wm8350}
Notice that the wm8350 is a competitor to the MC13783, but the Linux kernel drivers use the same {core} driver source code for both chips; the Sony ONLY has the wm8350 on it -- there is no MC13783 present.
The code that I most desperately want to understand how to make operate is found in the subdirectory {wm8350}, e.g. wm8350/wm8350pm/power_supply_sysfs.c.
I want the audio to fire up too, but I'm not quite sure where the pertinent audio amplifier code is yet...
Very clearly the wm8350pm code is designed to export a /sys directory interface; right now /sys is mounted and operational on the system; but I'm not very familiar with the semantics of these newer style interfaces... they aren't quite like the old APM power interfaces for Linux laptops...
First I checked the obvious:
If I do a "cat /sys/power/state" it returns the word "mem" and nothing else.
The file has permissions -rw-r--r--, so potentially it could be written -- but I don't know with what. The string "mem" does not exist anywhere in the source code for the wm8350pm drivers, so I don't even know if /sys/power/state is part of the source code.
Doing a find /sys -iname "wm8350" reveals a handful of directories with the patterns:
wm8350-rtc , wm8350-pmic , wm8350-bl , wm8350-power , wm8350-led
wm8350-hifi-dai , wm8350-codec
wm8350-imx32ads.0
So I do an ls -l on each directory and look for actual files rather than symbolic links or subdirectories, and what I find are stock useless writable files: bind, unbind, uevent,
and a very few read-only files: pmic_reg, dapm_widget, modalias, codec_reg, which aren't very helpful.
It's no surprise that:
Doing: cat /sys/devices/platform/wm8350-ebx5016-audi/modalias gives "wm8350-ebx5016-audio"
Doing: cat /sys/devices/platform/wm8350-imx32ads.0/modalias gives "wm8350-imx32ads"
and since audio is off... Doing: cat /sys/devices/platform/wm8350-ebx5016-audi/dapm_widget reveals the audio state:
Headphone Jack: Off
Line In Jack: On
Mic Bias: Off
Left DAC: Off
Right DAC: Off
... (all else off and omitted except )...
EBX5016-hifi: PM State: D3hot
The last two files, I expect, should do wm8350 chip register dumps... and one did.
Doing: cat /sys/devices/wm8350-pmic/pmic-reg causes a long pause, then nothing is printed.
but:
Doing: cat /sys/devices/wm8350/platform/wm8350-ebx5016-audi/wm8350-codec/codec_reg does print a list of registers up to 0xe8, which is just a few bytes larger than the datasheet says the chip should have (0x00 to 0xe6).
I tried using a Python program to play wav files (it works on my desktop computer), and I noticed that /dev/dsp does open, the mixers DO set volume levels, and nothing comes out. So -- the audio driver is not able to enable the sound amplifiers on its own automatically.
There are no ALSA sound files in /dev, nor are any ALSA tools found on the embedded machine... so I assume Sony is strictly using OSS /dev/dsp and /dev/mixer.
There is only one other access point I can find to the wm8350:
There IS a device driver /dev/wm8350.
That device node is created by the source code in subdirectory wm8350/wm8350_reg.c; in theory it should be able to read and write all registers using ioctl() calls from user space. However, something appears grossly wrong with it, for I wrote a test program to read the wm8350 registers... and most of the registers return error messages rather than allowing themselves to be read, including the most public ID registers (0x00, 0x01), etc.
So, I'm quite stuck. Pointers, thoughts, hints, are quite desired.
I would like to change your question a little bit.
How does Linux ASoC (ALSA System on Chip) power management work?
I will answer this and then give some hints on using this specific chip.
.. If I do a cat /sys/power/state it returns the word "mem" and nothing else. The file has permissions -rw-r--r--, so potentially it could be written -- but I don't know with what. The string "mem" does not exist anywhere in the source code for the wm8350pm drivers, so I don't even know if /sys/power/state is part of the source code.
You need to get an understanding of the Linux driver model. Hardware in Linux is structured like a tree. The rationale is that things must be powered up/down in specific sequences. For instance, you should not power down the PCI bus controller before powering down the PCI peripherals. Linux builds a tree of hardware, and each driver (code) and device (data/actual hardware) has specific callbacks/function pointers which handle some specific tasks.
probe - Are you there? Determines actual hardware/device is present.
remove - Shuts down device. Module removal, power off, etc.
suspend - going to sleep.
resume - waking up.
Items three and four may look interesting to you. Now, on to what /sys/power/state is about. The text mem means that suspend-to-memory is supported by your system. In this mode, Linux does these steps:
Find first lowest level active bus.
suspend devices on that bus.
suspend bus and de-activate.
If a bus is active go to step 1.
Set CPU to low power state (suspend to RAM).
This is not quite the full story. A few devices may support a wake-up. They will have extra call-backs to enable waking the system from sleep modes. Read the documentation to find out about this.
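To make the callback structure concrete, here is a minimal skeleton of a 2.6-era platform driver (all names are illustrative); the kernel walks its device tree and invokes these entry points in the sequences described above:

    #include <linux/module.h>
    #include <linux/platform_device.h>

    static int mydev_probe(struct platform_device *pdev)
    {
        /* "Are you there?" Claim resources, map registers, etc. */
        return 0;
    }

    static int mydev_remove(struct platform_device *pdev)
    {
        /* Undo probe: release resources, power the block down. */
        return 0;
    }

    static int mydev_suspend(struct platform_device *pdev, pm_message_t state)
    {
        /* Going to sleep: quiesce the device, enter low power. */
        return 0;
    }

    static int mydev_resume(struct platform_device *pdev)
    {
        /* Waking up: restore registers, bring the device back. */
        return 0;
    }

    static struct platform_driver mydev_driver = {
        .probe   = mydev_probe,
        .remove  = mydev_remove,
        .suspend = mydev_suspend,
        .resume  = mydev_resume,
        .driver  = { .name = "mydev" },
    };

    static int __init mydev_init(void)  { return platform_driver_register(&mydev_driver); }
    static void __exit mydev_exit(void) { platform_driver_unregister(&mydev_driver); }
    module_init(mydev_init);
    module_exit(mydev_exit);
    MODULE_LICENSE("GPL");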
That is general power management and driver/device structure. Now, how is ASoC (ALSA System on Chip) structured?
There are typically three drivers/devices that get stitched together.
Codec - The wm8350 in your case. This includes audio amplifier drive circuitry and can include sound mixing and source controls. Supports digital to analog and analog to digital, typically through an i2s interface. The i2s is not the only interface. Usually a register bank is controlled through a secondary interface; i2c in the wm8350 case.
DAI (digital audio interface) - Refer to chapter 1.2.18.1 of the iMx31 reference manual; the hardware is called the SSI by Freescale. The next chapter, on the AUDMUX, is also useful for understanding audio support on the iMx31/32.
Machine file - this is the board specific routing. It hooks the DAI to the codec and is the parent of both. It provides board clocking information and other specific configuration. For instance, it may use the AUDMUX to route the physical pins to the SSI block.
An i2c (or SPI) interface from the codec driver to send control commands to the codec chip. Some chips might use a wacky i2s interface or something else for control (but not in your case).
Now if you understood this, you will see that some features of the wm8350 seem to break the Linux model. The DAI interface can be stopped (digital audio), but the i2c interface must remain alive to program the registers related to the power functionality in the codec/PMIC (power management IC).
The latest Linux calls the IC a multi-function device; that support was introduced in 2.6.35, and the initial support may not have included the WM8350 features. Unfortunately, without some details on the layout of the Sony prs900 board, it would be difficult to know how to use the WM8350 PMIC functionality. The code will involve the iMx31 CPU, the WM8350, the i2c connection, and possibly some power supply circuitry.
For certain, you can just try echo mem > /sys/power/state and see what happens. If it works, you are lucky. The power/current consumption in sleep might not be optimal, but it will probably be hard to fix with the 2.6.23 kernel. You will want to look through the /sys directories for wake-up sources and possibly register these before issuing the suspend to memory command.
I can neither figure out how to read the battery/powerline status information, nor get the audio chip to play sound, etc ... although I have been studying the kernel modules for a while...
From the above discussions, the battery and powerline status will probably be found through another device. However, the pmic_reg file may actually give the status if things are connected properly on the board.
The audio chip will use ALSA. You need to use either alsamixer or the command-line amixer to set up audio routes through the codec, so that the DAI channel (PCM from the iMx32) is routed and sent to the speaker. To minimize power consumption, things are usually turned off by default. The /dev/dsp files are just OSS compatibility; the configuration will support ALSA natively, and you are better off using ALSA if possible.
Donate to the OSF and get a tax receipt, if this was helpful enough.

SPI: Linux driver model

I'm new to SPI. The Linux kernel provides an API for declaring SPI busses and devices, and for managing them according to the standard Linux driver model.
You can find the description of the struct spi_master here: https://www.kernel.org/doc/htmldocs/device-drivers/API-struct-spi-master.html
The description at the link above says that "Each device may be configured to use a different clock rate, since those shared signals are ignored unless the chip is selected". To put the sentence in context, I should say that by "device" they mean an SPI slave device, and by "those shared signals" they mean the MOSI, MISO and SCK signals.
In fact, in the struct spi_device (https://www.kernel.org/doc/htmldocs/device-drivers/API-struct-spi-device.html) there is an attribute named max_speed_hz that is not present in the struct spi_master. So I can understand the first part of the statement above: "Each device may be configured to use a different clock rate".
But what does the second part mean? Does "since those shared signals are ignored unless the chip is selected" mean that I'm allowed to use different clock rates, but only one at a time, by enabling/disabling the slaves with different rates?
@Matteo M.: I think you are actually not allowed to simultaneously set SS1, SS2 and SS3 to zero and in this way enable all three SPI slaves at the same moment in time. The reason is that the SPI slaves, while receiving data on the MOSI line, simultaneously send back data on the MISO line. If all three slaves actually put data on the (shared) MISO line, really bad things could happen, both regarding the data and the electrical flow.
SPI is a very loose "standard", there are not many rules to follow, which is good (and bad I guess). It is good because it is flexible. It is bad because it can be implemented differently depending on the specific hardware you are dealing with. Some devices support only half-duplex communication, which as you know requires coordination of when the bus can be driven. Select lines (chip enable, slave select, whatever you want to call them) provide a handy way to do this without using bits to identify which slave should get a message off the bus.
In the full-duplex mode, where data is dropped onto the bus from the master and from the slave on each clock pulse, the select line could be very much required to prevent bad things, as Wolfgang stated. I want to stress could be required; it is completely reasonable to have perhaps a master processor communicating with other processors that only drive the bus when responding to some specific bit pattern (like for instance, an "address")... More software/firmware? Yeah, but it is not stopping you.
So, if your 8-bit slaves are, say, 8-bit DACs, you could indeed write the values out as chunks of the master data register. Independent select lines enable you to do this without all those slaves driving the bus at once. Yeah, you have to shift the value for each slave through the master register one at a time, but that too is a completely reasonable design.
Unlike some more complex serial protocols, SPI is actually really flexible: it doesn't lock you into a maximum word size or require the data you write to the bus to be made up of things like addresses and offsets and such.
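To make the per-device clock rate concrete from user space, spidev lets each slave's node carry its own rate; the controller reprograms SCK whenever it asserts that device's chip select. A sketch (device nodes and rates below are made up):

    #include <fcntl.h>
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/spi/spidev.h>

    int main(void)
    {
        /* Two slaves on the same bus, on different chip selects. */
        int fast_dev = open("/dev/spidev0.0", O_RDWR);  /* hypothetical fast slave */
        int slow_dev = open("/dev/spidev0.1", O_RDWR);  /* hypothetical slow slave */

        uint32_t fast = 10000000, slow = 1000000;
        ioctl(fast_dev, SPI_IOC_WR_MAX_SPEED_HZ, &fast);  /* 10 MHz while CS0 is low */
        ioctl(slow_dev, SPI_IOC_WR_MAX_SPEED_HZ, &slow);  /*  1 MHz while CS1 is low */

        /* Each transfer clocks SCK at its own device's rate; since only one
           chip select is asserted at a time, the de-selected slave simply
           ignores the shared MOSI/MISO/SCK lines. */
        return 0;
    }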

How to generate a steady 37kHz GPIO trigger from inside linux kernel?

I have a microcontroller taking care of infrared TX carrier-wave generation currently, but I started wondering if I could dispose of it and do this work on the Linux side, thus bringing the cost of my embedded system down.
I'm running on a Freescale i.MX233 (454MHz ARM9), and if I access the registers directly through /dev/mem, I can achieve a quite steady 5MHz toggle on a GPIO pin.
Since I need 37kHz, I started looking at ways of slowing it down, but it seems that at least nanosleep() is way too coarse for this purpose.
I found one solution of calling rand() in a for loop, and I seem to be able to generate a 38.4kHz signal quite well. However, there is some unacceptable jitter from time to time according to the oscilloscope. (I understand that this is quite a waste of resources, but when the TX needs to be done, the system really has no other tasks.)
My questions:
Freescale's kernel code (3.8 branch) doesn't have the CONFIG_PREEMPT_RT patches, so that is one thing maybe I should look into, but before that:
Could I achieve more accurate performance by writing a kernel module to drive the GPIO from inside the kernel (a sketch of the idea follows below)? I do need to read some data from user space (the data to be sent), but other than that, I only need to toggle the LED at the specified frequency at the end of the GPIO, so the driver should be pretty simple.
Can I force the priority of my driver so that other tasks don't interrupt this GPIO toggling? (Sending the data currently takes roughly 400ms, and it's done very seldom.)
Is there some better way to create an interrupt at 37kHz, so that I don't stall the system with software?
A microcontroller is perfect for this kind of task, but it would be nice to avoid that cost overhead if possible...
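For concreteness, the kernel-module idea above might look roughly like this hrtimer sketch (the GPIO number is hypothetical, and whether the jitter at a ~13.5 us half-period is acceptable is exactly what's in question):

    #include <linux/module.h>
    #include <linux/hrtimer.h>
    #include <linux/ktime.h>
    #include <linux/gpio.h>

    #define IR_GPIO  42       /* hypothetical GPIO number */
    #define HALF_NS  13514    /* half of a 37 kHz period (~27.03 us) */

    static struct hrtimer ir_timer;
    static int level;

    static enum hrtimer_restart ir_toggle(struct hrtimer *t)
    {
        level ^= 1;
        gpio_set_value(IR_GPIO, level);              /* toggle the carrier pin */
        hrtimer_forward_now(t, ktime_set(0, HALF_NS));
        return HRTIMER_RESTART;
    }

    static int __init ir_init(void)
    {
        gpio_request_one(IR_GPIO, GPIOF_OUT_INIT_LOW, "ir-carrier");
        hrtimer_init(&ir_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
        ir_timer.function = ir_toggle;
        hrtimer_start(&ir_timer, ktime_set(0, HALF_NS), HRTIMER_MODE_REL);
        return 0;
    }

    static void __exit ir_exit(void)
    {
        hrtimer_cancel(&ir_timer);
        gpio_free(IR_GPIO);
    }

    module_init(ir_init);
    module_exit(ir_exit);
    MODULE_LICENSE("GPL");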
The i.MX23 PWM in "Multi-Chip Attachment Mode" is designed exactly for this requirement.
Use one of the PWMs in "Multi-Chip Attachment Mode"; for example, assuming you are using a 24MHz clock, with
MATT=1 (Enable multi-chip attachment mode)
MATT_SEL=1 (use the 24MHz clock)
CDIV=0x2 (or DIV_4, i.e. divide by 4)
INACTIVE_STATE=0x2 or 0x3
ACTIVE_STATE=0x3 or 0x2
PERIOD=161 (i.e. 162-1), which gives 24MHz / 4 / 162 ≈ 37.04kHz
If you use a 32MHz clock you will need other CDIV and PERIOD parameters to get to 37kHz.
See the "i.MX23 Applications Processor Reference Manual" for example code. If I am not mistaken the driver code is in arch/arm/plat-mxc/pwm.c but it doesn't seem to support the MATT mode. You will probably have to extend the code yourself.
Regarding the implementation -
The above answer relates to the CPU only. In practice, the ability to implement the idea depends on the board design. The board would need a header (pins for external connection) that connects to a GPIO pin that can be connected via the pinmux to one of the PWMs. I would assume that most reference designs have at least one PWM-configurable GPIO exposed through a header. Then the question is whether there is only one, and whether you are already using it for some other control purpose.
After determining that there is a header with a free PWM-configurable GPIO, you need to configure the pinmux and activate the PWM. There are instructions for this in the processor reference manual noted above. Most systems do this configuration in the boot loader's board_init() (assuming U-Boot), although it can probably also be done in userspace with some mmap trickery after Linux boots.
Finally you would need to write a driver based on the interface to the PWM module in platform-mxc_pwm.c.
If you are using the i.MX23 EVK 10.05, you might be able to modify the LED PWM driver, since it is already configured at the level of the bootloader and kernel, and connect your device to the LED output instead of the LED. (You will need a hardware technician to help you with this.) Make sure you configure the kernel with CONFIG_LEDS_MXS.
The above comments regarding implementation are somewhat speculative since I don't know the EVK. Perhaps someone who knows it can improve on this.
Update September 21, 2013
Another way to generate a 37kHz signal with the i.MX23 or with any SoC with a similar ARM CPU core is to use an unused on-chip timer to generate a FIQ interrupt at the required frequency and write a FIQ interrupt handler to toggle a GPIO pin. Maxime Ripard posted a complete example of this method using the i.MX28 SoC on his Free Electrons blog on April 30 this year. To use this method you will need both an unused timer and not be using the FIQ interrupt for another purpose such as one of the SPI, camera, or brownout-detection drivers that use the ARM FIQ. You will also need to write the ISR in ARM assembler.
The best way to get a 37 kHz signal would be to find some serial/audio/PWM output that can generate it in hardware.
It might be possible to raise the priority of your userspace process, but this won't help against interrupts or high-priority kernel tasks.
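For what it's worth, raising the priority from user space looks like this; SCHED_FIFO plus locked memory removes scheduler and paging latency, but, as said, not interrupt latency:

    #include <sched.h>
    #include <sys/mman.h>

    int go_realtime(void)
    {
        struct sched_param sp = { .sched_priority = 99 };  /* max RT priority */

        /* Keep all pages resident so a page fault can't add latency. */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
            return -1;
        /* Preempt every normal (SCHED_OTHER) task, but not IRQ handlers. */
        return sched_setscheduler(0, SCHED_FIFO, &sp);
    }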
An RT kernel would allow you to get priority over more kernel tasks, but wouldn't help against all interrupts.
I don't know if you will be able to get the maximum latency below the 37 kHz period (27 µs); I think it's unlikely.
Doing this in the kernel would help because you could disable interrupt handling.
However, disabling interrupts for as long as 400 ms is frowned upon.

Does Windows have an interrupt-context?

I have recently started reading Linux Kernel Development By Robert Love and I am Love -ing it!
Please read the below excerpt from the book to better understand my questions:
A number identifies interrupts and the kernel uses this number to execute a specific interrupt handler to process and respond to the interrupt. For example, as you type, the keyboard controller issues an interrupt to let the system know that there is new data in the keyboard buffer. The kernel notes the interrupt number of the incoming interrupt and executes the correct interrupt handler. The interrupt handler processes the keyboard data and lets the keyboard controller know it is ready for more data...
Now I have dual boot on my machine, and sometimes (in fact, many times) when I type something on Windows, I find myself doing it in what I call Night crawler mode. This is when I am typing and I don't see anything on the screen, and later, after a while, the entire text comes out in one flash; probably the buffer just spits everything out.
Now I don't see this happening on Linux. Is it because of the interrupt-context present in Linux and the absence of it in Windows?
BTW, I am still not sure if there is an interrupt-context in Windows; google didn't give me any relevant results for that.
All OSes have an interrupt context, it's a feature/constraint of the CPU architecture -- basically, this is "just the way things work" with computer hardware. Different OSes (and drivers within that OS) make different choices about what work and how much work to do in the interrupt before returning, though. That may be related to your windows experience, or it may not. There is a lot of code involved in getting a key press translated into screen output, and interrupt handling is only a tiny part.
A number identifies interrupts and the kernel uses this number to execute a specific interrupt handler to process and respond to the interrupt. For example, as you type, the keyboard controller issues an interrupt to let the system know that there is new data in the keyboard buffer. The kernel notes the interrupt number of the incoming interrupt and executes the correct interrupt handler. The interrupt handler processes the keyboard data and lets the keyboard controller know it is ready for more data
This is a pretty poor description. Things might be different now with USB keyboards, but this seems to discuss what would happen with an old PS/2 connection, where an "8042"-compatible chipset on your motherboard signals on an IRQ line to the CPU, which then executes whatever code is at the address stored in location 9 in the interrupt table (traditionally an array of pointers starting at address 0 in physical memory, though from memory you could change the address, and last time I played with this stuff PCs still had <1MB RAM and used different memory layout modes).
That dispatch process has nothing to do with the kernel... it's the way the hardware works. (The keyboard controller could be asked not to generate interrupts, allowing OS/driver software to "poll" it regularly to see if there happened to be new event data available, but it'd be pretty crazy to use that really).
Still, the code address from the interrupt table will point into the kernel or keyboard driver, and the kernel/driver code will read the keyboard event data from the keyboard controller's I/O port. For these hardware interrupt handlers, a primary goal is to get the data from the device and store it into a buffer as quickly as possible - both to ensure a return from the interrupt to whatever processing was happening, and because the keyboard controller can only handle one event at a time - it needs to be read off into the buffer before the next event.
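The classic shape of such a handler, sketched as freestanding C for the PS/2 case (port 0x60 is the 8042 data port; inb() stands in for whatever port-I/O helper the kernel provides):

    #define BUF_SIZE 128                /* power of two for cheap wrap-around */

    static volatile unsigned char buf[BUF_SIZE];
    static volatile unsigned int head, tail;

    extern unsigned char inb(unsigned short port);  /* assumed port-I/O helper */

    /* Called on the keyboard IRQ: grab the scancode and get out fast. */
    void keyboard_isr(void)
    {
        unsigned char scancode = inb(0x60);         /* 8042 data port */
        unsigned int next = (head + 1) & (BUF_SIZE - 1);
        if (next != tail) {                         /* drop the byte if full */
            buf[head] = scancode;
            head = next;
        }
        /* ...acknowledge the interrupt controller and return... */
    }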
It's then up to the OS/driver either to provide some kind of input-availability signal to application software, or to wait for the application software to attempt to read more keyboard input, but it can do that in a "whenever you're ready" fashion. Either way, once an application has time to read and start responding to the input, things can happen that mean it takes an unexpectedly long time: it could be that the extra keystroke triggers some complex repagination algorithm that takes a long time to run, or that the keystroke results in the program executing code that has been swapped out to disk (check Wikipedia for "virtual memory"), in which case it could be only after the hard disk has read part of the program into memory that the program can continue to run.
There are thousands of such edge cases involving window movement, graphics clipping algorithms, etc. that could account for the keyboard-handling code taking a long time to complete, and if other keystrokes have happened meanwhile they'll be read by the keyboard driver into that buffer, then only "perceived" by the application after the slow/blocking processing completes. It may well be that the processing consequent to all the keystrokes then in the buffer completes much more quickly: for example, if part of the program was swapped in from disk, that part may be ready to process the remaining keystrokes.
Why would Linux do better at this than Windows? Mainly because the Operating System, drivers and applications tend to be "leaner and meaner"... less bloated software (like C++ vs C# .NET), less wasted memory, so less swapping and delays.

Is forcing I2C communication safe?

For a project I'm working on, I have to talk to a multi-function chip via I2C. I can do this from Linux user space via the /dev/i2c-1 interface.
However, it seems that a driver is talking to the same chip at the same time. This results in my I2C_SLAVE accesses failing with an errno value of EBUSY. Well - I can override this via the ioctl I2C_SLAVE_FORCE. I tried it, and it works. My commands reach the chip.
Question: Is it safe to do this? I know for sure that the address ranges I write to are never accessed by any kernel driver. However, I am not sure whether forcing I2C communication that way may confuse some internal state machine or so. (I'm not that into I2C, I just use it...)
For reference, the hardware facts:
OS: Linux
Architecture: TI OMAP3 3530
I2C-Chip: TWL4030 (does power, audio, USB and lots of other things...)
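For reference, the user-space access pattern being described is roughly the following (the 0x48 slave address and register 0x00 are placeholders):

    #include <fcntl.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/i2c-dev.h>

    int main(void)
    {
        int fd = open("/dev/i2c-1", O_RDWR);
        /* I2C_SLAVE fails with EBUSY when a kernel driver owns the address;
           I2C_SLAVE_FORCE overrides that check. 0x48 is a placeholder. */
        ioctl(fd, I2C_SLAVE_FORCE, 0x48);

        uint8_t reg = 0x00;             /* placeholder register address */
        uint8_t val;
        write(fd, &reg, 1);             /* set the register pointer */
        read(fd, &val, 1);              /* read the register back */
        close(fd);
        return 0;
    }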
I don't know that particular chip, but often you have commands that require a sequence of writes: first to one address to set a certain mode, then you read or write another address, where the function of the second address changes based on what you wrote to the first one. So if the driver is in the middle of one of those operations and you interrupt it (or vice versa), you have a race condition that will be difficult to debug. For a reliable solution, you had better communicate through the chip's driver...
I mostly agree with @Wim. But I would like to add that this can definitely cause irreversible problems, or destruction, depending on the device.
I know of a gyroscope (L3GD20) that requires that you don't write to certain locations. The way the chip is set up, these locations contain the manufacturer's settings, which determine how the device functions and performs.
This may seem like an easy problem to avoid, but if you think about how I2C works, all of the bytes are passed one bit at a time. If you interrupt in the middle of another byte's transmission, the results are not only truly unpredictable, but the risk of permanent damage also rises sharply. It is, of course, entirely up to the chip how it handles the problem.
Since microcontrollers tend to operate at speeds much faster than the bus speeds allowed on I2C, and since the bus speeds themselves are dynamic, based on the speeds at which devices process the information, the best bet is to insert pauses or loops between transmissions that wait for things to finish. If you have to, you can even insert a timeout. If these pauses aren't working, then something is wrong with the implementation.
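A minimal sketch of that kind of pacing, with hypothetical helpers standing in for a real status check and delay:

    /* Poll a device "busy" flag with a timeout instead of firing the
       next transaction immediately. Both helpers are hypothetical. */
    extern int device_busy(void);        /* e.g. reads a status register */
    extern void delay_ms(unsigned ms);

    int wait_until_idle(unsigned timeout_ms)
    {
        while (timeout_ms--) {
            if (!device_busy())
                return 0;                /* safe to start the next transfer */
            delay_ms(1);
        }
        return -1;                       /* give up rather than corrupt a transfer */
    }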
