Hello, I want to record 8 channels from a TIDA-01454 CMB into a BeagleBone AI. Since the CMB is built with two PCM1864 ADCs and the target is also a Beagle board, I followed this guide (https://www.ti.com/lit/an/sprac97/sprac97.pdf) with some changes (channels_max = 16) to make it compatible with the BeagleBone AI.
I have managed to record audio from 4 of the 8 microphones on the CMB (I just tap each mic to check whether it is working). However, I want to record all 8 channels. Currently the working microphones are MIC1, MIC4, MIC5 and MIC8, although I would say there is a lot of noise.
The CMB has 4 data output pins, so I suppose each one transmits 2 channels; based on that, this is my dts file:
pcm5102a: pcm5102a {
    #sound-dai-cells = <0>;
    compatible = "ti,pcm5102a";
    status = "okay";
};

sound {
    compatible = "simple-audio-card";
    simple-audio-card,format = "i2s";
    simple-audio-card,name = "PCM5102a";
    simple-audio-card,bitclock-master = <&sound1_master>;
    simple-audio-card,frame-master = <&sound1_master>;
    simple-audio-card,bitclock-inversion;

    simple-audio-card,cpu {
        sound-dai = <&mcasp1>;
    };

    sound1_master: simple-audio-card,codec {
        #sound-dai-cells = <0>;
        sound-dai = <&pcm5102a>;
        //clocks = <&mcasp1_fck>;
        //clock-names = "mclk";
    };
};
&mcasp1 {
    #sound-dai-cells = <0>;
    pinctrl-names = "default";
    pinctrl-0 = <&mcasp1_pins>;
    status = "okay";
    op-mode = <0>; /* MCASP_IIS_MODE */
    tdm-slots = <2>;
    num-serializer = <4>;
    serial-dir = < /* 1 TX 2 RX 0 unused */
        2 2 0 0 0 0 0 0 0 0 2 2
    >;
    rx-num-evt = <4>;
    tx-num-evt = <4>;
};
The serial-dir is set that way because I use mcasp1_axr0, mcasp1_axr1, mcasp1_axr10 and mcasp1_axr11, as those are the ones available on the BeagleBone AI. This is my configuration for the CMB:
uint8_t U1_PCM1864_CONFIG[][2] = {
{0x00, 0x00}, // Change to Page 0
{0x01, 0x40}, // PGA CH1_L to 32dB
{0x02, 0x40}, // PGA CH1_R to 32dB
{0x03, 0x40}, // PGA CH2_L to 32dB
{0x04, 0x40}, // PGA CH2_R to 32dB
{0x05, 0x86}, // Enable SMOOTH PGA Change; Independent Link PGA;
{0x06, 0x41}, // Polarity: Normal, Channel: VINL1[SE]
{0x07, 0x41}, // Polarity: Normal, Channel: VINR1[SE]
{0x08, 0x44}, // Polarity: Normal, Channel: VINL3[SE]
{0x09, 0x44}, // Polarity: Normal, Channel: VINR3[SE]
{0x0A, 0x00}, // Secondary ADC Input: No Selection
{0x0B, 0x44}, // RX WLEN: 24bit; TX WLEN: 24 bit; FMT: I2S format
{0x10, 0x03}, // GPIO0_FUNC - SCK Out; GPIO0_POL - Normal
{0x11, 0x50}, // GPIO3_FUNC - DOUT2; GPIO3_POL - Normal
{0x12, 0x04}, // GPIO0_DIR - GPIO0 - Output
{0x13, 0x40}, // GPIO3_DIR - GPIO3 - Output
{0x20, 0x11} // MST_MODE: Master; CLKDET_EN: Disable
};
uint8_t U2_PCM1864_CONFIG[][2] = {
{0x00, 0x00}, // Change to Page 0
{0x01, 0x40}, // PGA CH1_L to 32dB
{0x02, 0x40}, // PGA CH1_R to 32dB
{0x03, 0x40}, // PGA CH2_L to 32dB
{0x04, 0x40}, // PGA CH2_R to 32dB
{0x05, 0x86}, // Enable SMOOTH PGA Change; Independent Link PGA;
{0x06, 0x41}, // Polarity: Normal, Channel: VINL1[SE]
{0x07, 0x41}, // Polarity: Normal, Channel: VINR1[SE]
{0x08, 0x44}, // Polarity: Normal, Channel: VINL3[SE]
{0x09, 0x44}, // Polarity: Normal, Channel: VINR3[SE]
{0x0A, 0x00}, // Secondary ADC Input: No Selection
{0x0B, 0x44}, // RX WLEN: 24bit; TX WLEN: 24 bit; FMT: I2S format
{0x10, 0x00}, // GPIO0_FUNC - GPIO0; GPIO0_POL - Normal
{0x11, 0x50}, // GPIO3_FUNC - DOUT2; GPIO3_POL - Normal
{0x12, 0x00}, // GPIO0_DIR - GPIO0 - Input
{0x13, 0x40}, // GPIO3_DIR - GPIO3 - Output
{0x20, 0x01} // MST_MODE: Slave; CLKDET_EN: Enable
};
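These tables are register/value pairs written to each ADC's control port. For reference, here is a minimal sketch of how such a table can be applied via Linux i2c-dev, assuming I2C control as in the TI guide (the bus path and the 0x4a address are illustrative; the second PCM1864 sits at a different address):
#include <fcntl.h>
#include <stddef.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>

/* Write one configuration table to a PCM1864, two bytes (register, value) at a time. */
static int pcm1864_apply(const char *bus, int addr, const uint8_t cfg[][2], size_t n)
{
    int fd = open(bus, O_RDWR);
    if (fd < 0)
        return -1;
    if (ioctl(fd, I2C_SLAVE, addr) < 0) { /* select the ADC's 7-bit address */
        close(fd);
        return -1;
    }
    for (size_t i = 0; i < n; i++) {
        if (write(fd, cfg[i], 2) != 2) {  /* register byte, then value byte */
            close(fd);
            return -1;
        }
    }
    close(fd);
    return 0;
}
/* e.g. pcm1864_apply("/dev/i2c-1", 0x4a, U1_PCM1864_CONFIG,
 *                    sizeof U1_PCM1864_CONFIG / sizeof U1_PCM1864_CONFIG[0]); */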
So what am I missing to get the 8 channels?
I just had a brief look at the chip you reference here. From what I see, the PCM1864 has four channels and one data line (I2S), so in order to use two PCM1864s you would need to specify two pins for them and set the channel count to 4.
tdm-slots specifies how many TDM slots (channels) are present; if you need 4, you should specify 4 here.
serial-dir specifies whether a serializer (i.e. a data line for I2S) is an input or an output. You have two PCM1864s, so I would assume that you need only two inputs (2 RX) instead of four.
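As a sketch, assuming the two codecs feed mcasp1_axr0 and mcasp1_axr1 with four slots per line (the serializer positions must match your actual wiring), the mcasp1 node could become:
&mcasp1 {
    #sound-dai-cells = <0>;
    pinctrl-names = "default";
    pinctrl-0 = <&mcasp1_pins>;
    status = "okay";
    op-mode = <0>; /* MCASP_IIS_MODE */
    tdm-slots = <4>; /* 4 channels per data line */
    num-serializer = <4>;
    serial-dir = < /* 1 TX 2 RX 0 unused */
        2 2 0 0 0 0 0 0 0 0 0 0
    >;
    rx-num-evt = <4>;
    tx-num-evt = <4>;
};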
I have an FTDI USB3 development board and some FTDI-provided code for accessing it. The code works fine for things like the device number, VID/PID etc., but always returns zero for the ftHandle. As the handle is required for driving the board, this is not helpful! Can anyone see why this happens?
static FT_STATUS displayDevicesMethod2(void)
{
    FT_STATUS ftStatus;
    FT_HANDLE ftHandle = NULL;

    // Get and display the list of devices connected.
    // First call FT_CreateDeviceInfoList to get the number of connected devices.
    // Then either call FT_GetDeviceInfoList or FT_GetDeviceInfoDetail to display device info.
    // Device info: Flags (usb speed), device type (600 e.g.), device ID (vendor, product),
    // handle for subsequent data access.
    DWORD numDevs = 0;
    ftStatus = FT_CreateDeviceInfoList(&numDevs); // Build a list and return number connected.
    if (FT_FAILED(ftStatus))
    {
        printf("Failed to create a device list, status = %d\n", ftStatus);
    }
    printf("Successfully created a device list.\n\tNumber of connected devices: %d\n", numDevs);

    // Method 2: using FT_GetDeviceInfoDetail
    if (!FT_FAILED(ftStatus) && numDevs > 0)
    {
        ftHandle = NULL;
        DWORD Flags = 0;
        DWORD Type = 0;
        DWORD ID = 0;
        char SerialNumber[16] = { 0 };
        char Description[32] = { 0 };
        for (DWORD i = 0; i < numDevs; i++)
        {
            ftStatus = FT_GetDeviceInfoDetail(i, &Flags, &Type, &ID, NULL, SerialNumber,
                                              Description, &ftHandle);
            if (!FT_FAILED(ftStatus))
            {
                printf("Device[%d] (using FT_GetDeviceInfoDetail)\n", i);
                printf("\tFlags: 0x%x %s | Type: %d | ID: 0x%08X | ftHandle=0x%p\n",
                       Flags,
                       Flags & FT_FLAGS_SUPERSPEED ? "[USB 3]" :
                       Flags & FT_FLAGS_HISPEED ? "[USB 2]" :
                       Flags & FT_FLAGS_OPENED ? "[OPENED]" : "",
                       Type,
                       ID,
                       ftHandle);
                printf("\tSerialNumber=%s\n", SerialNumber);
                printf("\tDescription=%s\n", Description);
            }
        }
    }
    return ftStatus;
}
This is indeed not super straightforward, but a short peek in the FTDI Knowledge Base yields:
This function builds a device information list and returns the number of D2XX devices connected to the system. The list contains information about both unopen and open devices.
A handle only exists for an opened device; enumeration alone does not open anything. I assume your code does not already include that step, so you need to open the device first, e.g. using FT_Open. There are plenty of examples available; you can check their page or Stack Overflow for a working one.
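A minimal sketch of that missing step, reusing the FT_FAILED convention from the code above (note: if this is FTDI's USB3 D3XX library, the equivalent call is FT_Create rather than FT_Open, so treat this as illustrative):
FT_HANDLE ftHandle = NULL;
FT_STATUS ftStatus = FT_Open(0, &ftHandle); // open the device at index 0
if (FT_FAILED(ftStatus))
{
    printf("FT_Open failed, status = %d\n", ftStatus);
    return ftStatus;
}
// ftHandle is now non-NULL and valid for data access calls
// ... use the device ...
FT_Close(ftHandle);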
I made a library that encodes video in Azure using the v3 API (.NET Core). I have successfully encoded up to FHD.
But then I tried to encode 4K UHD video (based on the "How to encode with a custom Transform" and "H264 Multiple Bitrate 4K" articles).
So, here's my code to create this Transform:
private static async Task<Transform> Ensure4kTransformExistsAsync(IAzureMediaServicesClient client,
string resourceGroupName,
string accountName)
{
H264Layer CreateH264Layer(int bitrate, int width, int height)
{
return new H264Layer(
profile: H264VideoProfile.Auto,
level: "auto",
bitrate: bitrate, // Note that the unit is bits per second
maxBitrate: bitrate,
//bufferWindow: TimeSpan.FromSeconds(5), // this is the default
width: width.ToString(),
height: height.ToString(),
bFrames: 3,
referenceFrames: 3,
adaptiveBFrame: true,
frameRate: "0/1"
);
}
// Does a Transform already exist with the desired name? Assume that an existing Transform with the desired name
// also uses the same recipe or Preset for processing content.
Transform transform = await client.Transforms.GetAsync(resourceGroupName, accountName, TRANSFORM_NAME_H264_MULTIPLE_4K_S);
if (transform != null) return transform;
// Create a new Transform Outputs array - this defines the set of outputs for the Transform
TransformOutput[] outputs =
{
// Create a new TransformOutput with a custom Standard Encoder Preset
new TransformOutput(
new StandardEncoderPreset(
codecs: new Codec[]
{
// Add an AAC Audio layer for the audio encoding
new AacAudio(
channels: 2,
samplingRate: 48000,
bitrate: 128000,
profile: AacAudioProfile.AacLc
),
// Next, add a H264Video for the video encoding
new H264Video(
// Set the GOP interval to 2 seconds for both H264Layers
keyFrameInterval: TimeSpan.FromSeconds(2),
// Add H264Layers
layers: new[]
{
CreateH264Layer(20000000, 4096, 2304),
CreateH264Layer(18000000, 3840, 2160),
CreateH264Layer(16000000, 3840, 2160),
CreateH264Layer(14000000, 3840, 2160),
CreateH264Layer(12000000, 2560, 1440),
CreateH264Layer(10000000, 2560, 1440),
CreateH264Layer(8000000, 2560, 1440),
CreateH264Layer(6000000, 1920, 1080),
CreateH264Layer(4700000, 1920, 1080),
CreateH264Layer(3400000, 1280, 720),
CreateH264Layer(2250000, 960, 540),
CreateH264Layer(1000000, 640, 360)
}
),
// Also generate a set of PNG thumbnails
new PngImage(
start: "25%",
step: "25%",
range: "80%",
layers: new[]
{
new PngLayer(
"50%",
"50%"
)
}
)
},
// Specify the format for the output files - one for video+audio, and another for the thumbnails
formats: new Format[]
{
// Mux the H.264 video and AAC audio into MP4 files, using basename, label, bitrate and extension macros
// Note that since you have multiple H264Layers defined above, you have to use a macro that produces unique names per H264Layer
// Either {Label} or {Bitrate} should suffice
new Mp4Format(
"Video-{Basename}-{Label}-{Bitrate}{Extension}"
),
new PngFormat(
"Thumbnail-{Basename}-{Index}{Extension}"
)
}
),
OnErrorType.StopProcessingJob,
Priority.Normal
)
};
const string DESCRIPTION = "Multiple 4k";
// Create the custom Transform with the outputs defined above
transform = await client.Transforms.CreateOrUpdateAsync(resourceGroupName, accountName, TRANSFORM_NAME_H264_MULTIPLE_4K_S,
outputs,
DESCRIPTION);
return transform;
}
But the job ends up with the following error:
Job ended with error: Fatal service error, please contact support.
An error has occurred. Stage: ProcessSubtaskRequest. Code: System.Net.WebException.
And I did use an S3 Media Reserved Unit for encoding. So, is there any way to make it work?
Posting the solutions back to this thread for completeness:
There was a bug in the sample code (the RunAsync() method) which resulted in Jobs using an incorrect output Asset. The bug has now been fixed.
There was a related bug in error handling that is being addressed.
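For context, the safe pattern is to create a dedicated output Asset per Job and reference it by name in the Job's outputs. A sketch against the v3 .NET SDK (inputUrl and the generated names are illustrative):
// Create a unique output Asset for this Job so concurrent Jobs never collide.
string outputAssetName = $"output-{Guid.NewGuid():N}";
Asset outputAsset = await client.Assets.CreateOrUpdateAsync(
    resourceGroupName, accountName, outputAssetName, new Asset());

// Submit the Job against the 4K Transform, pointing its output at that Asset.
Job job = await client.Jobs.CreateAsync(
    resourceGroupName, accountName, TRANSFORM_NAME_H264_MULTIPLE_4K_S,
    $"job-{Guid.NewGuid():N}",
    new Job
    {
        Input = new JobInputHttp(files: new[] { inputUrl }),
        Outputs = new JobOutput[] { new JobOutputAsset(outputAsset.Name) }
    });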
I'm at a loss with this — I have several personal projects in mind that essentially require that I "tap" into the audio stream: read the audio data, do some processing and modify the audio data before it is finally sent to the audio device.
One example of these personal projects is a software-based active crossover. If I have an audio device with 6 channels (i.e., 3 left + 3 right), then I can read the data, apply an LP filter (×2, left + right), a BP filter, and an HP filter, and output the streams through each of the six channels.
Note that I know how to write a player application that does this; instead, I want to do it so that any audio from any source (audio players, video players, YouTube or any other audio played by the web browser, etc.) is subject to this processing.
I've seen some of the examples (e.g., pcm_min.c from the alsa-project website, and the play and record examples in Jeff Tranter's Sep 2004 Linux Journal article), but I don't seem to have enough information to do something like what I describe above.
Any help or pointers will be appreciated.
You can implement your project as a LADSPA plugin, test it with Audacity or any other program supporting LADSPA plugins, and when you like it, insert it into the alsa/pulseaudio/jack playback chain.
"LADSPA" is a single header file defining a simple interface for writing audio processing plugins. Each plugin has its input/output/control ports and a run() function. The run() function is executed for each block of samples to do the actual audio processing: apply the "control" arguments to the "input" buffers and write the result to the "output" buffers.
Example LADSPA stereo amplifier plugin (single control argument: "Amplification factor", two input ports, two output ports):
// gcc -fPIC -shared -o /full/path/to/plugindir/amp_example.so amp_example.c
#include <stdlib.h>
#include "ladspa.h"
enum PORTS {
PORT_CAMP,
PORT_INPUT1,
PORT_INPUT2,
PORT_OUTPUT1,
PORT_OUTPUT2
};
typedef struct {
LADSPA_Data *c_amp;
LADSPA_Data *i_audio1;
LADSPA_Data *i_audio2;
LADSPA_Data *o_audio1;
LADSPA_Data *o_audio2;
} MyAmpData;
static LADSPA_Handle myamp_instantiate(const LADSPA_Descriptor *Descriptor, unsigned long SampleRate)
{
MyAmpData *data = (MyAmpData*)malloc(sizeof(MyAmpData));
data->c_amp = NULL;
data->i_audio1 = NULL;
data->i_audio2 = NULL;
data->o_audio1 = NULL;
data->o_audio2 = NULL;
return data;
}
static void myamp_connect_port(LADSPA_Handle Instance, unsigned long Port, LADSPA_Data *DataLocation)
{
MyAmpData *data = (MyAmpData*)Instance;
switch (Port)
{
case PORT_CAMP: data->c_amp = DataLocation; break;
case PORT_INPUT1: data->i_audio1 = DataLocation; break;
case PORT_INPUT2: data->i_audio2 = DataLocation; break;
case PORT_OUTPUT1: data->o_audio1 = DataLocation; break;
case PORT_OUTPUT2: data->o_audio2 = DataLocation; break;
}
}
static void myamp_run(LADSPA_Handle Instance, unsigned long SampleCount)
{
MyAmpData *data = (MyAmpData*)Instance;
double amp = *data->c_amp;
size_t i;
for (i = 0; i < SampleCount; i++)
{
data->o_audio1[i] = data->i_audio1[i]*amp;
data->o_audio2[i] = data->i_audio2[i]*amp;
}
}
static void myamp_cleanup(LADSPA_Handle Instance)
{
MyAmpData *data = (MyAmpData*)Instance;
free(data);
}
static LADSPA_Descriptor myampDescriptor = {
.UniqueID = 123, // for public release see http://ladspa.org/ladspa_sdk/unique_ids.html
.Label = "amp_example",
.Name = "My Amplify Plugin",
.Maker = "alsauser",
.Copyright = "WTFPL",
.PortCount = 5,
.PortDescriptors = (LADSPA_PortDescriptor[]){
LADSPA_PORT_INPUT | LADSPA_PORT_CONTROL,
LADSPA_PORT_INPUT | LADSPA_PORT_AUDIO,
LADSPA_PORT_INPUT | LADSPA_PORT_AUDIO,
LADSPA_PORT_OUTPUT | LADSPA_PORT_AUDIO,
LADSPA_PORT_OUTPUT | LADSPA_PORT_AUDIO
},
.PortNames = (const char * const[]){
"Amplification factor",
"Input left",
"Input right",
"Output left",
"Output right"
},
.PortRangeHints = (LADSPA_PortRangeHint[]){
{ /* PORT_CAMP */
LADSPA_HINT_BOUNDED_BELOW | LADSPA_HINT_BOUNDED_ABOVE | LADSPA_HINT_DEFAULT_1,
0, /* LowerBound*/
10 /* UpperBound */
},
{0, 0, 0}, /* PORT_INPUT1 */
{0, 0, 0}, /* PORT_INPUT2 */
{0, 0, 0}, /* PORT_OUTPUT1 */
{0, 0, 0} /* PORT_OUTPUT2 */
},
.instantiate = myamp_instantiate,
//.activate = myamp_activate,
.connect_port = myamp_connect_port,
.run = myamp_run,
//.deactivate = myamp_deactivate,
.cleanup = myamp_cleanup
};
// NULL-terminated list of plugins in this library
const LADSPA_Descriptor *ladspa_descriptor(unsigned long Index)
{
if (Index == 0)
return &myampDescriptor;
else
return NULL;
}
(if you prefer a "short" 40-line version see https://pastebin.com/unCnjYfD)
Add as many input/output channels as you need and implement your code in the myamp_run() function. Build the plugin and set the LADSPA_PATH environment variable to the directory where you've built it, so that other apps can find it:
export LADSPA_PATH=/usr/lib/ladspa:/full/path/to/plugindir
Test it in Audacity or any other program supporting LADSPA plugins. To test it in a terminal you can use the applyplugin tool from the "ladspa-sdk" package:
applyplugin input.wav output.wav /full/path/to/plugindir/amp_example.so amp_example 2
And if you like the result, insert it into your default playback chain. For plain ALSA you can use a config like this (it won't work for pulse/jack):
# ~/.asoundrc
pcm.myamp {
type plug
slave.pcm {
type ladspa
path "/usr/lib/ladspa" # required but ignored as `filename` is set
slave.pcm "sysdefault"
playback_plugins [{
filename "/full/path/to/plugindir/amp_example.so"
label "amp_example"
input.controls [ 2.0 ] # Amplification=2
}]
}
}
# to test it: aplay -Dmyamp input.wav
# to point "default" pcm to it uncomment next line:
#pcm.!default "myamp"
See also:
ladspa.h - answers to most technical questions are there in comments
LADSPA SDK overview
listplugins and analyseplugin tools from "ladspa-sdk" package
alsa plugins : "type ladspa" syntax, and alsa configuration file syntax
ladspa plugins usage examples
If you want to get your hands dirty with some code, you could check out some of these articles by Paul Davis (a Linux audio guru). You'll have to combine the playback and capture examples to get live audio. Give it a shot, and if you have problems you can post a code-specific question on SO.
Once you get the live audio working, you can implement an LP filter and go from there.
There are plenty of LADSPA and LV2 audio plugins that implement LP, HP and BP filters, but I'm not sure whether any are available for your particular channel configuration. It sounds like you want to roll your own anyway.
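If you do roll your own, the DSP itself can start very small, e.g. a one-pole low-pass inside the run() callback. A sketch following the plugin skeleton above (MyLpfData, its port layout and the cutoff control are illustrative, not a tuned design):
#include <math.h>
#include "ladspa.h"

/* Illustrative instance data: one cutoff control port, one audio in/out
 * pair, the sample rate captured in instantiate(), and one float of state. */
typedef struct {
    LADSPA_Data *c_cutoff;   /* cutoff frequency in Hz */
    LADSPA_Data *i_audio;
    LADSPA_Data *o_audio;
    float sample_rate;
    float state;             /* previous output sample */
} MyLpfData;

static void mylpf_run(LADSPA_Handle Instance, unsigned long SampleCount)
{
    MyLpfData *data = (MyLpfData *)Instance;
    /* one-pole coefficient: a = 1 - exp(-2*pi*fc/fs) */
    float a = 1.0f - expf(-2.0f * (float)M_PI * (*data->c_cutoff) / data->sample_rate);
    float y = data->state;
    unsigned long i;
    for (i = 0; i < SampleCount; i++)
    {
        y += a * (data->i_audio[i] - y); /* y[n] = y[n-1] + a*(x[n] - y[n-1]) */
        data->o_audio[i] = y;
    }
    data->state = y; /* carry filter state across blocks */
}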
I need some help to find out why my rtc-ds1306 driver doesn't bind to the spi2.1 device.
I'm working on an embedded Linux (3.2.0) platform on which I would like to use SPI to communicate with an RTC DS1306 and other SPI devices. The platform comes by default with spi1.0 talking to a NOR flash, and I'm able to add and communicate through the spidev driver with /dev/spi1.1 and /dev/spi2.0. The rtc-ds1305 driver (which also supports the DS1306) is available under /sys/bus/spi/drivers/, but it doesn't bind to any SPI device (e.g. spi2.1). spi1.1 and spi2.0 bind automatically. I don't see any error message at boot...
Can you tell me what is missing?
//---board-xxxx.c files----
static const struct flash_platform_data am335x_spi_flash = {
.type = "w25q64",
.name = "spi_flash",
};
/*
* SPI Flash works at 80Mhz however SPI Controller works at 48MHz.
* So setup Max speed to be less than that of Controller speed
*/
static struct spi_board_info am335x_spi0_slave_info[] = {
{
.modalias = "m25p80",
.platform_data = &am335x_spi_flash,
.irq = -1,
.max_speed_hz = 24000000,
.bus_num = 1,
.chip_select = 0,
},
//PH140107 add spidev driver for the spi0_cs1
{
.modalias = "spidev",
.max_speed_hz = 12000000,
.bus_num = 1,
.chip_select = 1,
.mode = SPI_MODE_0,
},
};
//PH140110 add this platform_data
static const struct ds1305_platform_data am335x_spi_rtc = {
.is_ds1306 = true,
.en_1hz = false,
};
/* PH140109
 * SPI RTC DS1306 (uses the rtc-ds1305 driver) and add SPI1_CS0 in case it's needed for spi1_dsp
* So setup Max speed to be less than that of Controller speed
*/
static struct spi_board_info am335x_spi1_slave_info[] = {
{
.modalias = "rtc-ds1305",
.platform_data = &am335x_spi_rtc,
.max_speed_hz = 1000000,
.bus_num = 2,
.chip_select = 1,
.mode = SPI_CS_HIGH | SPI_CPOL | SPI_CPHA,
},
{
.modalias = "spidev",
.max_speed_hz = 48000000,
.bus_num = 2,
.chip_select = 0,
.mode = SPI_MODE_0,
},
};
edit: I can't find the RTC under /dev/rtcX, but in /sys/bus/spi/devices I can see spi1.0, spi1.1, spi2.0 and spi2.1. Additionally, in /sys/bus/spi/drivers I can find m25p80, rtc-ds1305 and spidev. If I go into /sys/bus/spi/drivers/spidev I can see spi1.1 and spi2.0 (plus bind, uevent and unbind), but in /sys/bus/spi/drivers/rtc-ds1305 there are just bind, uevent and unbind.
I think I should see /dev/rtc0, and in /sys/bus/spi/drivers/rtc-ds1305 I should see spi2.1.
I was working on a development board where the DS1306 wasn't populated, so nothing could answer the rtc-ds1305 driver's sanity check at probe time. When connected on the real board, it appears under /dev/rtc0.
Problem solved!
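For anyone debugging a similar case: since the driver directory already exposes bind/unbind attributes, you can force a probe attempt from the shell and watch the kernel log for the result (device and driver names as above):
echo spi2.1 > /sys/bus/spi/drivers/rtc-ds1305/bind
dmesg | tail   # a failed probe (e.g. the sanity check) is reported here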
I'm writing a simple program to set and clear a pin (the purpose is to use that pin as a custom spi_CS).
I'm able to export that pin (gpio1_17, port 9 pin 23 on the BB White) and to use it through the filesystem, but I have to drive it faster.
This is the code:
uint32_t *gpio;
int fd = open("/dev/mem", O_RDWR|O_SYNC);
if (fd < 0) {
    fprintf(stderr, "Unable to open port\n\r");
    exit(fd);
}
gpio = (uint32_t *) mmap(NULL, getpagesize(), PROT_READ|PROT_WRITE, MAP_SHARED, fd, GPIO1_offset); // start of GPIO1
if (gpio == (void *) -1) {
    printf("Memory map failed.\n");
    exit(0);
} else {
    printf("Memory mapped at address %p.\n", gpio);
}
printf("\nGPIO_OE:%X\n", gpio[GPIO_OE/4]);
gpio[GPIO_OE/4]=USR1;
printf("\nGPIO_OE:%X\n", gpio[GPIO_OE/4]);
printf("\nGPIO_CLEARDATAOUT:%X\n", gpio[GPIO_CLEARDATAOUT/4]);
gpio[GPIO_CLEARDATAOUT/4]=USR1;
printf("\nGPIO_CLEARDATAOUT:%X\n", gpio[GPIO_CLEARDATAOUT/4]);
sleep(1);
printf("\nGPIO_SETDATAOUT%X\n", gpio[GPIO_SETDATAOUT/4]);
gpio[GPIO_DATAOUT/4]=USR1;
printf("\nGPIO_SETDATAOUT%X\n", gpio[GPIO_SETDATAOUT/4]);
with
#define GPIO1_offset 0x4804c000
#define GPIO1_size (0x4804cfff - GPIO1_offset)
#define GPIO_OE 0x134
#define GPIO_SETDATAOUT 0x194
#define GPIO_CLEARDATAOUT 0x190
#define GPIO_DATAOUT 0x13C
#define USR1 (1 << 17)
I'm able to output-enable that pin, because if I drive it high before running the program, that pin goes low. But I cannot set and reset it. Any ideas?
Why are you directly modifying the registers? It is way easier to just use it as a Linux GPIO:
#define GPIO_1_17 "49"
int gpio;
status_codes stat = STATUS_SUCCESS;
// excerpt: the break statements below assume an enclosing loop or
// do { ... } while (0) error-handling block
//Export our GPIOs for use
if((gpio = open("/sys/class/gpio/export", O_WRONLY)) >= 0) {
write(gpio, GPIO_1_17, strlen(GPIO_1_17));
close(gpio);
} else {
stat = STATUS_GPIO_ACCESS_FAILURE;
break;
}
//Set the direction and pull low
if((gpio = open("/sys/class/gpio/gpio" GPIO_1_17 "/direction", O_WRONLY)) >= 0) {
write(gpio, "out", 3); // Set out direction
close(gpio);
} else {
stat = STATUS_GPIO_ACCESS_FAILURE;
break;
}
if((gpio = open("/sys/class/gpio/gpio" GPIO_1_17 "/value", O_WRONLY)) >= 0) {
write(gpio, "0", 1); // Pull low
close(gpio);
} else {
stat = STATUS_GPIO_ACCESS_FAILURE;
break;
}
Then just make sure it is muxed as a gpio in your inits.
As far as the mmap method you have above, your addressing looks correct. The addresses in the reference manual are byte addresses and you are using a 32-bit pointer, so what you have there is correct. However, this line: gpio[GPIO_OE/4]=USR1 makes every pin on GPIO1 an output except pin 17, which it makes an input (0 = output and 1 = input). You probably meant: gpio[GPIO_OE/4] &= ~USR1
Also, I believe you meant to have gpio[GPIO_SETDATAOUT/4]=USR1; instead of gpio[GPIO_DATAOUT/4]=USR1;. Both will cause GPIO1_17 to be set; however, what you have will also force all the other pins on GPIO1 to 0.
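Putting those two fixes together, the critical writes would look like this (same names as the question; a sketch of just the corrected lines):
gpio[GPIO_OE/4] &= ~USR1;         /* read-modify-write: only pin 17 becomes an output */
gpio[GPIO_CLEARDATAOUT/4] = USR1; /* writing 1s here clears only those bits -> pin low */
gpio[GPIO_SETDATAOUT/4] = USR1;   /* writing 1s here sets only those bits -> pin high */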
I would definitely recommend using the designed kernel interfaces, because mmap'ing registers that are also controlled by the kernel is asking for trouble.
Good Luck! : )
EDIT: My bad, I just realized you said why you are not driving it through the file system: you need to drive it faster! You may also want to consider writing/modifying the SPI driver so that this is done in kernel land if speed is what you're after. The omap gpio interfaces are simple to use there as well, just request and set : ).