STM32F4 ADXL345 I2C communication - sensors

I have started working on I2C communication by experimenting with the ADXL345 sensor. I wrote some basic code to test whether my setup works. According to the ADXL345 technical documentation, register 0x00 should return the device ID, which is 0xE5. When I read this register, the returned value is 0. This application should be basic, but I guess I am still missing something. Besides my own experiments, I also searched this community for ADXL345 problems, but I could not find an answer. I would appreciate any guidance on this problem. I have attached my code.
void SysTick_Handler(void){
    HAL_IncTick();
    HAL_SYSTICK_IRQHandler();
}

void SysClockEn();

/* System configuration: PA8 -> I2C3 SCL, PC9 -> I2C3 SDA */
int main(){
    SysClockEn();
    HAL_Init();

    /*------ GPIO configuration for I2C3 ------*/
    __GPIOA_CLK_ENABLE();
    GPIO_InitTypeDef *ptrB6, addrB6;
    ptrB6 = &addrB6;
    ptrB6->Alternate = GPIO_AF4_I2C3;
    ptrB6->Pin = GPIO_PIN_8;
    ptrB6->Pull = GPIO_NOPULL;
    ptrB6->Speed = GPIO_SPEED_FREQ_HIGH;
    ptrB6->Mode = GPIO_MODE_AF_OD;
    HAL_GPIO_Init(GPIOA, ptrB6);

    __GPIOC_CLK_ENABLE();
    GPIO_InitTypeDef *ptrC, addrC;
    ptrC = &addrC;
    ptrC->Alternate = GPIO_AF4_I2C3;
    ptrC->Mode = GPIO_MODE_AF_OD;
    ptrC->Pin = GPIO_PIN_9;
    ptrC->Pull = GPIO_NOPULL;
    ptrC->Speed = GPIO_SPEED_FREQ_HIGH;
    HAL_GPIO_Init(GPIOC, ptrC);

    /*----- I2C configuration -----*/
    //__HAL_RCC_I2C3_CLK_ENABLE();
    __I2C3_CLK_ENABLE();
    I2C_HandleTypeDef *ptrI2C, addrI2C;
    ptrI2C = &addrI2C;
    ptrI2C->Instance = I2C3;
    ptrI2C->Init.ClockSpeed = 100000; // 100 kHz
    ptrI2C->Init.DutyCycle = I2C_DUTYCYCLE_2;
    ptrI2C->Init.AddressingMode = I2C_ADDRESSINGMODE_7BIT;
    ptrI2C->Mode = HAL_I2C_MODE_MASTER;
    //ptrI2C->Init.GeneralCallMode = I2C_GENERALCALL_DISABLE;
    //ptrI2C->Init.NoStretchMode = I2C_NOSTRETCH_DISABLE;
    HAL_I2C_Init(ptrI2C);
    __HAL_I2C_ENABLE(ptrI2C);

    uint8_t data = 0x00;
    unsigned char buffer[2];
    uint8_t *buf;
    unsigned char pt;
    uint32_t ptr;
    uint8_t val;

    while(1){
        val = HAL_I2C_IsDeviceReady(ptrI2C, 0x1D, 0xe5, 1000);
        pt = HAL_I2C_GetState(ptrI2C);
        //HAL_I2C_Master_Transmit(ptrI2C,0x1d,0x00,1,0);
        //HAL_I2C_Master_Receive(ptrI2C,0x1d,buffer,1,100);
        //HAL_Delay(2);
        HAL_I2C_Mem_Read(ptrI2C,SensAddr,0x00,1,buffer,2,1000);
        ptr = HAL_I2C_GetError(ptrI2C);
    }
}

void SysClockEn(){
    __PWR_CLK_ENABLE();
}

The documentation of the sensor says:
the 7-bit I2C address for the device is 0x1D
So in your code you should write:
#define SensAddr (0x1D<<1)
...
HAL_I2C_Mem_Read(ptrI2C,SensAddr,0x00,1,buffer,2,1000);
...
This is because ST HAL considers the 7 bit address left shifted.
The documentation also says:
An alternate I2C address of 0x53 (followed by the R/W bit) can be chosen by grounding the SDO/ALT ADDRESS
If this is the case for your hardware, change the code to:
#define SensAddr (0x53<<1)
...
HAL_I2C_Mem_Read(ptrI2C,SensAddr,0x00,1,buffer,2,1000);
...
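As a quick sanity check, a minimal sketch along these lines (assuming the ptrI2C handle from the question, and that SDO/ALT ADDRESS is tied high so the 0x1D address applies; use 0x53 otherwise) could be:

#define SensAddr (0x1D << 1)   /* or (0x53 << 1) if SDO/ALT ADDRESS is grounded */

uint8_t dev_id = 0;
/* HAL_I2C_IsDeviceReady also expects the left-shifted 7-bit address */
if (HAL_I2C_IsDeviceReady(ptrI2C, SensAddr, 3, 100) == HAL_OK) {
    /* Register 0x00 is DEVID; the ADXL345 should answer 0xE5 */
    HAL_I2C_Mem_Read(ptrI2C, SensAddr, 0x00, I2C_MEMADD_SIZE_8BIT, &dev_id, 1, 100);
}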

Related

STM32 HAL with SHT25 from sensirion over I2C,IIC

Has anyone who uses the STM32 HAL driver got I2C communication with a Sensirion sensor like the SHT25 working, and can you show me your snippets?
I got communication working using the code examples from Sensirion. (https://www.sensirion.com/fileadmin/user_upload/customers/sensirion/Dokumente/11_Sample_Codes_Software/Humidity_Sensors/Sensirion_Humidity_Sensors_SHT21_Sample_Code_V1.2.pdf)
I get an acknowledge when I address the sensor, but when I want to read sensor data, I only get "11111111".
Working code for Sensirion SHTC1 on STM32 with HAL drivers:
#define SHTC1_I2C_ADDR 0xE0
#define TEMP_HUM_CMD_MEASURE_T_FIRST 0x7866
//Launch convert
uint8_t cmd[2];
cmd[0] = (uint8_t)(TEMP_HUM_CMD_MEASURE_T_FIRST >> 8);
cmd[1] = (uint8_t)TEMP_HUM_CMD_MEASURE_T_FIRST;
HAL_I2C_Master_Transmit(&hi2c1, SHTC1_I2C_ADDR, cmd, 2, 100);
//Wait
HAL_Delay(15);
//Read values
uint8_t rawValues[6]; //T MSB, T LSB, T CRC, H MSB, H LSB, H CRC
HAL_I2C_Master_Receive(&hi2c1, SHTC1_I2C_ADDR, rawValues, 6, 100);
uint16_t rawTemp = (uint16_t)((((uint16_t)rawValues[0])<<8) | (uint16_t)rawValues[1]);
uint16_t rawHum = (uint16_t)((((uint16_t)rawValues[3])<<8) | (uint16_t)rawValues[4]);
float hum = (float)((float)100 * (float)rawHum / (float)65536);
float temp =(float)((float)-45 + (float)175 * (float)rawTemp / (float)65536);
Use the HAL_I2C_Mem_Write() and HAL_I2C_Mem_Read() HAL APIs for writing and reading data to and from the sensor over the I2C interface. What data to write/read, and at which memory location (register or command code), you have to find in the sensor's datasheet.
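As a sketch of that approach for the SHT25 itself (assuming the hi2c1 handle from the snippet above, the sensor's 7-bit address 0x40, and the "trigger T measurement, hold master" command 0xE3 from the Sensirion datasheet), it could look like this:

#define SHT25_I2C_ADDR  (0x40 << 1)  /* HAL expects the left-shifted 7-bit address */
#define SHT25_CMD_T_HM  0xE3         /* trigger temperature measurement, hold master */

uint8_t raw[3];  /* T MSB, T LSB, CRC */
/* The command byte is passed as the "memory address"; in hold-master mode the
   sensor stretches the clock until the measurement is ready. */
if (HAL_I2C_Mem_Read(&hi2c1, SHT25_I2C_ADDR, SHT25_CMD_T_HM,
                     I2C_MEMADD_SIZE_8BIT, raw, 3, 100) == HAL_OK) {
    uint16_t rawTemp = ((uint16_t)raw[0] << 8) | (raw[1] & 0xFC); /* clear status bits */
    float temp = -46.85f + 175.72f * (float)rawTemp / 65536.0f;   /* datasheet formula */
}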

why repeated start based i2c operation are not supported in linux?

I want to read from an I2C slave which needs a repeated-start (multi-start) operation to read its register values.
As far as I have traced the I2C driver in Linux kernel 3.18.21, I found it does not support the multi-start operation, so I have no way to read from this I2C slave (Power over Ethernet manager PD69104B1).
I am still looking for a way to extend the driver, if needed, for this I2C slave, or anything else that might be required.
I use i2c-tools 3.2.1.
I tried:
$ i2cdump -y 0 0x20
but I see the same values every time, which means it reads the first register each time.
$ i2cget -y 0 0x20 0x12
or any other register address returns the same value as the first register.
This slave supports two read operations:
byte read - write the register address, then read its value, but this needs a repeated start
block read - start reading and the I2C slave will return register values in sequence, e.g. 0x00, 0x01, ... (first register, second, third, fourth, etc.)
I tried all possible ways:
i2c_smbus_access()
i2c_smbus_write_byte()
i2c_smbus_read_block_data()
write()
read()
but most of the time the I2C bus goes into timeout errors and hang situations.
Does anyone have an idea how to achieve this in Linux?
Update 0:
This I2C slave needs unique read cycles:
Change of Direction: S Addr Wr [A] RegAddress [A] S Addr Rd [A] [RegValue] P
Short Read: S Addr Rd [A] [RegValue] P
Here the last value returned by the I2C slave does not expect an ACK.
I tried to use I2C_M_NO_RD_ACK, but without much help. I read some values and then get FF.
This PoE I2C slave has an I2C timeout of 14 ms on SCL, which raises some doubt. This looks non-standard, since I2C can work down to 0 Hz, i.e. SCL can be stretched by the master as long as it wants. Linux is definitely not a real-time OS, so meeting this timeout cannot be guaranteed, and an SCL-timeout reset of the I2C slave may happen. This is my current conclusion!
I2C Message notation used is from:
https://www.kernel.org/doc/Documentation/i2c/i2c-protocol
why repeated start based i2c operation are not supported in linux?
As a matter of fact, they are supported.
If you are looking for a way to perform a repeated start condition in user-space, you probably need to do an ioctl() with the I2C_RDWR request, like it's described here (see the last code snippet in the original question) and here (code in the question).
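For reference, a minimal user-space sketch of such a combined transfer (hypothetical code, assuming the /dev/i2c-0 bus and slave address 0x20 from the question) might be:

#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c.h>
#include <linux/i2c-dev.h>

/* Read one register with a repeated start between the write and read phases. */
int i2c_userspace_read_reg(const char *dev, uint8_t addr, uint8_t reg, uint8_t *val)
{
    int fd = open(dev, O_RDWR);
    if (fd < 0)
        return -1;

    struct i2c_msg msgs[2] = {
        { .addr = addr, .flags = 0,        .len = 1, .buf = &reg },
        { .addr = addr, .flags = I2C_M_RD, .len = 1, .buf = val  },
    };
    struct i2c_rdwr_ioctl_data xfer = { .msgs = msgs, .nmsgs = 2 };

    int ret = ioctl(fd, I2C_RDWR, &xfer); /* both messages in one transfer -> repeated start */
    close(fd);
    return ret < 0 ? -1 : 0;
}

/* usage: uint8_t v; i2c_userspace_read_reg("/dev/i2c-0", 0x20, 0x12, &v); */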
Below is described the way to perform a repeated start in kernel-space.
In Linux kernel I2C read operations with repeated start condition are performed by default for combined (write/read) messages.
Here is an example how to perform combined I2C transfer:
/**
 * Read a set of registers via I2C using "repeated start" condition.
 *
 * Two I2C messages are sent by this function:
 *   1. I2C write operation (write register address) with no STOP bit at the end
 *   2. I2C read operation
 *
 * @client: I2C client structure
 * @reg: register address (subaddress)
 * @len: bytes count to read
 * @buf: buffer which will contain read data
 *
 * Returns 0 on success or negative value on error.
 */
static int i2c_read_regs(struct i2c_client *client, u8 reg, u8 len, u8 *buf)
{
    int ret;
    struct i2c_msg msg[2] = {
        {
            .addr = client->addr,
            .len = 1,
            .buf = &reg,
        },
        {
            .addr = client->addr,
            .flags = I2C_M_RD,
            .len = len,
            .buf = buf,
        }
    };

    ret = i2c_transfer(client->adapter, msg, 2);
    if (ret < 0) {
        dev_err(&client->dev, "I2C read failed\n");
        return ret;
    }

    return 0;
}
To read just 1 byte (a single register value) you can use the following helper function:
/**
 * Read one register via I2C using "repeated start" condition.
 *
 * @client: I2C client structure
 * @reg: register address (subaddress)
 * @val: variable to store read value
 *
 * Returns 0 on success or negative value on error.
 */
static int i2c_read_reg(struct i2c_client *client, u8 reg, u8 *val)
{
    return i2c_read_regs(client, reg, 1, val);
}
For the i2c_read_regs(client, reg, 1, val) call:
device address is client->addr
register address is reg
1 means that we want to read 1 byte of data
read data will reside at val
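As a usage sketch (a hypothetical probe function, not part of the original answer), reading a single chip-ID register with the helper above could look like this:

static int my_sensor_probe(struct i2c_client *client, const struct i2c_device_id *id)
{
    u8 chip_id;
    int ret;

    /* single repeated-start read of register 0x00 */
    ret = i2c_read_reg(client, 0x00, &chip_id);
    if (ret)
        return ret;

    dev_info(&client->dev, "chip id: 0x%02x\n", chip_id);
    return 0;
}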
NOTE: If your I2C controller (or its driver) doesn't support repeated starts in combined messages, you can still use the bit-bang implementation of I2C, which is the i2c-gpio driver.
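For kernels of that era (pre-4.x, platform-data based), a rough sketch of registering such a bit-banged bus could look like the following; the GPIO numbers and bus id are placeholders, not taken from the question:

#include <linux/platform_device.h>
#include <linux/i2c-gpio.h>

/* placeholder GPIO numbers for the bit-banged SDA/SCL lines */
static struct i2c_gpio_platform_data my_i2c_gpio_pdata = {
    .sda_pin = 23,
    .scl_pin = 24,
    .udelay  = 5,           /* roughly 100 kHz */
};

static struct platform_device my_i2c_gpio_device = {
    .name = "i2c-gpio",
    .id   = 2,              /* becomes the I2C bus number */
    .dev  = {
        .platform_data = &my_i2c_gpio_pdata,
    },
};

/* somewhere in board init code: */
/* platform_device_register(&my_i2c_gpio_device); */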
If nothing works, you can try the following as a last resort. For some reason I can't quite remember, in order to make the repeated start work I needed to add I2C_M_NOSTART to the .flags of the first message, like this:
struct i2c_msg msg[2] = {
    {
        .addr = client->addr,
        .flags = I2C_M_NOSTART,
        .len = 1,
        .buf = &reg,
    },
    {
        .addr = client->addr,
        .flags = I2C_M_RD,
        .len = len,
        .buf = buf,
    }
};
As noted in Documentation/i2c/i2c-protocol:
If you set the I2C_M_NOSTART variable for the first partial message,
we do not generate Addr, but we do generate the startbit S.

Remove input driver bound to the HID interface

I'm playing with some driver code for a special kind of keyboard. This keyboard has special modes. According to the specification, those modes can only be enabled by sending and receiving feature reports.
I'm using the 'hid.c' file and user mode to send HID reports. But both 'hid_read' and 'hid_get_feature_report' fail with error number -1.
I already tried detaching the keyboard from the kernel drivers using libusb, but when I do that, 'hid_open' fails. I guess this is because the HID interface is already being used by 'input' or some other kernel driver. So perhaps I don't need to unbind the kernel hidraw driver, but should instead try unbinding the keyboard ('input') driver on top of the 'hidraw' driver. Am I correct?
If so, any idea how I could do that? And how do I find out which drivers are using which drivers, and which low-level driver is bound to which driver?
I found the answer to this myself.
The answer is to dig into this project and find its HID implementation on libusb.
Or you could directly receive the report.
int HID_API_EXPORT hid_get_feature_report(hid_device *dev, unsigned char *data, size_t length)
{
    int res = -1;
    int skipped_report_id = 0;
    int report_number = data[0];

    if (report_number == 0x0) {
        /* Offset the return buffer by 1, so that the report ID
           will remain in byte 0. */
        data++;
        length--;
        skipped_report_id = 1;
    }

    res = libusb_control_transfer(dev->device_handle,
        LIBUSB_REQUEST_TYPE_CLASS|LIBUSB_RECIPIENT_INTERFACE|LIBUSB_ENDPOINT_IN,
        0x01/*HID get_report*/,
        (3/*HID feature*/ << 8) | report_number,
        dev->interface,
        (unsigned char *)data, length,
        1000/*timeout millis*/);

    if (res < 0)
        return -1;

    if (skipped_report_id)
        res++;

    return res;
}
I'm sorry I can't post my actual code due to legal reasons. However, the above code is from the hidapi implementation.
So even if you work with an old kernel, you still have a chance to get your driver working.
This answers this question too: https://stackoverflow.com/questions/30565999/kernel-version-2-6-32-does-not-support-hidiocgfeature
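For context, a minimal usage sketch of that hidapi call (with hypothetical vendor/product IDs 0x1234/0x5678 and an assumed 64-byte feature report using report ID 0) could look like this:

#include <stdio.h>
#include <hidapi/hidapi.h>

int main(void)
{
    hid_init();
    hid_device *dev = hid_open(0x1234, 0x5678, NULL); /* hypothetical VID/PID */
    if (!dev)
        return 1;

    unsigned char report[65] = { 0x00 };  /* byte 0 = report ID, rest = payload */
    int res = hid_get_feature_report(dev, report, sizeof(report));
    if (res > 0)
        printf("got %d bytes of feature report\n", res);

    hid_close(dev);
    hid_exit();
    return 0;
}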

AVR UART - Java Android bluetooth communication

I have a Bluetooth module connected to an AVR (ATmega32A) via UART. Some bytes that are transmitted from the Bluetooth module to the AVR are not received properly.
For example, these bytes are transmitted/received properly (UTF-8):
Bluetooth module transmits byte X -> received byte X'
'w'->'w'
's'->'s'
'z'->'z'
'm'->'m'
bytes not received properly:
'q'->'y'
'p'->'~'
'1'->'9'
Bluetooth connection settings:
Bps/Par/Bits: 115200 8N1
init UART:
#define F_CLK 16000000
#define BAUD 115200

uint16_t ubrr_value = (uint16_t) (((F_CLK)/(16 * BAUD)) - 1);
UBRRL = ubrr_value;
UBRRH = (ubrr_value>>8);
// 8 bit frame, async mode
UCSRC = (1<<URSEL) | (3<<UCSZ0);
// receive and transmit mode
UCSRB = (1<<TXEN) | (1<<RXEN);
transmit/receive a byte over UART:
char USART_ReceiveByte()
{
    while(!(UCSRA & (1<<RXC)));
    return UDR;
}

void uart_sendRS(char VALUE)
{
    while(!(UCSRA & (1<<UDRE)));
    UDR = VALUE;
}
main loop:
while(1)
{
    recivedByte = USART_ReceiveByte();
    uart_sendRS(recivedByte);
}
I would be glad to know why it does not work properly.
EDIT: if I change the order, the result is:
'y'->'y'
'~'->'~'
'9'->'9'
EDIT2: probably there is something wrong with setting UBRRL and UBRRH (ubrr_value = 7 in this case); can someone confirm whether this is correct and whether the microcontroller can handle such a high baud rate?
#define F_CLK 16000000
#define BAUD 115200
uint16_t ubrr_value = (uint16_t) (((F_CLK)/(16 * BAUD)) - 1);
UBRRL = ubrr_value;
UBRRH = (ubrr_value>>8);
The problem here is that you are not initialising the UART properly. You need to set the U2X bit in UCSRA if you want that baud rate with acceptable accuracy: at 16 MHz, the normal-speed formula gives UBRR = 16000000/(16*115200) - 1 ≈ 7.68, which truncates to 7 and yields an actual rate of 125000 baud, roughly 8.5% too fast; in double-speed (U2X) mode UBRR works out to 16 with only about 2% error. If you are using avr-libc you may use the following code to properly compute the baud rate settings.
void uart0_init(void) {
# define BAUD 115200
# include <util/setbaud.h>
UBRRH = UBRRH_VALUE;
UBRRL = UBRRL_VALUE;
# if USE_2X
UCSRA |= _BV(U2X);
# else
UCSRA &= ~_BV(U2X);
# endif
# undef BAUD
/* other uart stuff you may need */
}
If you look at the datasheet for your microcontroller, section 20.12, you will find a table with this information precomputed for you. Cheers.
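For comparison, a minimal sketch of the same initialisation done by hand (assuming the 16 MHz clock, 115200 baud and ATmega32A register names from the question) might be:

#include <avr/io.h>
#include <stdint.h>

#define F_CLK 16000000UL
#define BAUD  115200UL

void uart0_init_manual(void)
{
    /* double-speed mode: UBRR = F_CLK/(8*BAUD) - 1 = 16 -> ~117.6 kbaud, ~2% error */
    uint16_t ubrr = (uint16_t)((F_CLK / (8UL * BAUD)) - 1);

    UCSRA |= (1 << U2X);
    UBRRH = (uint8_t)(ubrr >> 8);
    UBRRL = (uint8_t)ubrr;

    /* 8 data bits, no parity, 1 stop bit, async mode */
    UCSRC = (1 << URSEL) | (3 << UCSZ0);
    /* enable receiver and transmitter */
    UCSRB = (1 << RXEN) | (1 << TXEN);
}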

Sending UDP packets from the Linux Kernel

Even though a similar topic already exists, I noticed that it dates back two years, so I guess it's more appropriate to open a fresh one...
I'm trying to figure out how to send UDP packets from the Linux Kernel (3.3.4), in order to monitor the behavior of the random number generator (/drivers/char/random.c). So far, I've managed to monitor a few things owing to the sock_create and sock_sendmsg functions. You can find the typical piece of code I use at the end of this message. (You might also want to download the complete modified random.c file here.)
By inserting this code inside the appropriate random.c functions, I'm able to send a UDP packet for each access to /dev/random and /dev/urandom, and for each keyboard/mouse event used by the random number generator to harvest entropy. However, it doesn't work at all when I try to monitor the disk events: it generates a kernel panic during boot.
Consequently, here's my main question: Have you any idea why my code causes so much trouble when inserted in the disk events function? (add_disk_randomness)
Alternatively, I've read about the netpoll API, which is supposed to handle this kind of UDP-in-kernel problem. Unfortunately I haven't found any relevant documentation apart from a quite interesting but outdated Red Hat presentation from 2005. Do you think I should rather use this API? If yes, have you got any example?
Any help would be appreciated.
Thanks in advance.
PS: It's my first question here, so please don't hesitate to tell me if I'm doing something wrong, I'll keep it in mind for future :)
#include <linux/net.h>
#include <linux/in.h>
#include <linux/netpoll.h>

#define MESSAGE_SIZE 1024
#define INADDR_SEND ((unsigned long int)0x0a00020f) //10.0.2.15

static bool sock_init;
static struct socket *sock;
static struct sockaddr_in sin;
static struct msghdr msg;
static struct iovec iov;

[...]

int error, len;
mm_segment_t old_fs;
char message[MESSAGE_SIZE];

if (sock_init == false)
{
    /* Creating socket */
    error = sock_create(AF_INET, SOCK_DGRAM, IPPROTO_UDP, &sock);
    if (error < 0)
        printk(KERN_DEBUG "Can't create socket. Error %d\n", error);

    /* Connecting the socket */
    sin.sin_family = AF_INET;
    sin.sin_port = htons(1764);
    sin.sin_addr.s_addr = htonl(INADDR_SEND);
    error = sock->ops->connect(sock, (struct sockaddr *)&sin, sizeof(struct sockaddr), 0);
    if (error < 0)
        printk(KERN_DEBUG "Can't connect socket. Error %d\n", error);

    /* Preparing message header */
    msg.msg_flags = 0;
    msg.msg_name = &sin;
    msg.msg_namelen = sizeof(struct sockaddr_in);
    msg.msg_control = NULL;
    msg.msg_controllen = 0;
    msg.msg_iov = &iov;
    msg.msg_control = NULL;

    sock_init = true;
}

/* Sending a message */
sprintf(message,"EXTRACT / Time: %llu / InputPool: %4d / BlockingPool: %4d / NonblockingPool: %4d / Request: %4d\n",
        get_cycles(),
        input_pool.entropy_count,
        blocking_pool.entropy_count,
        nonblocking_pool.entropy_count,
        nbytes*8);
iov.iov_base = message;
len = strlen(message);
iov.iov_len = len;
msg.msg_iovlen = len;

old_fs = get_fs();
set_fs(KERNEL_DS);
error = sock_sendmsg(sock, &msg, len);
set_fs(old_fs);
I solved my problem a few months ago. Here's the solution I used.
The standard packet-sending API (sock_create, connect, ...) cannot be used in some contexts (interrupt handlers). Using it in the wrong place leads to a kernel panic.
The netpoll API is more "low-level" and works in every context. However, there are several conditions :
Ethernet devices
IP network
UDP only (no TCP)
Different computers for sending and receiving packets (You can't send to yourself.)
Make sure to respect them, because you won't get any error message if there's a problem. It will just fail silently :) Here's a bit of code.
Declaration
#include <linux/netpoll.h>
#define MESSAGE_SIZE 1024
#define INADDR_LOCAL ((unsigned long int)0xc0a80a54) //192.168.10.84
#define INADDR_SEND ((unsigned long int)0xc0a80a55) //192.168.10.85
static struct netpoll* np = NULL;
static struct netpoll np_t;
Initialization
np_t.name = "LRNG";
strlcpy(np_t.dev_name, "eth0", IFNAMSIZ);
np_t.local_ip = htonl(INADDR_LOCAL);
np_t.remote_ip = htonl(INADDR_SEND);
np_t.local_port = 6665;
np_t.remote_port = 6666;
memset(np_t.remote_mac, 0xff, ETH_ALEN);
netpoll_print_options(&np_t);
netpoll_setup(&np_t);
np = &np_t;
Use
char message[MESSAGE_SIZE];
sprintf(message,"%d\n",42);
int len = strlen(message);
netpoll_send_udp(np,message,len);
Hope it can help someone.
A panic during boot might be caused by trying to use something which hasn't been initialized yet. Looking at the stack trace might help figure out what actually happened.
As for your problem, I think you are trying to do a simple thing, so why not stick with simple tools? ;) printk might indeed be a bad idea, but give trace_printk a go. trace_printk is part of the Ftrace infrastructure.
The section "Using trace_printk()" in the following article should teach you everything you need to know:
http://lwn.net/Articles/365835/
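As a rough illustration (a sketch only, not the article's code), instrumenting a kernel function with trace_printk could look like the following; the output then shows up in the Ftrace ring buffer (/sys/kernel/debug/tracing/trace):

#include <linux/kernel.h>
#include <linux/timex.h>

/* hypothetical hook inside the function being monitored */
static void my_entropy_hook(int nbytes)
{
    /* safe in atomic/interrupt context, unlike a UDP send */
    trace_printk("entropy request: %d bytes at cycle %llu\n",
                 nbytes, (unsigned long long)get_cycles());
}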
