I2C not working between LPC824 and MMA8453Q - scope

For a project I am using an LPC824 MCU from NXP and I want to read out data from the accelerometer over I2C. This accelerometer is the MMA8453Q. Looking into the datasheet of the accelerometer I see the following:
From my understanding this means that you give a start signal, send the device address followed by the write bit, get an acknowledgement, and so on.
The address I have to write is the following:
The register I chose for now is 0x0D, the "Who am I" register, which should always read back as 0x3A.
For people familiar with the NXP examples: I based my code on one of them. This is the code I have at the moment:
*txData = 0x0D;
SetupXferRecAndExecute(0x1C, txData, 1, rxData, 0);
SetupXferRecAndExecute(0x1C, txData, 0, rxData, 1);
and this is what it looks like on the scope
So as I can see, I send the device address twice: first I write the register address and then I want to read it, but it gives 0x00 back.
Can someone please help me? Thanks in advance!

My answer is based on these datasheets:
MMA8453Q: https://www.nxp.com/docs/en/data-sheet/MMA8453Q.pdf
LPC82X: https://www.nxp.com/docs/en/data-sheet/LPC82X.pdf
LPC82X is the Master and MMA8453Q is the Slave.
Depending on whether you set pin 7 of the slave high (SA0 = 1) or low (SA0 = 0), your 7-bit slave address is either 0x1D or 0x1C. Together with the read (1) or write (0) bit, the final 8-bit address is as written in the table you have posted in your question.
For the following example, let's assume SA0 is set high, which means the full 8-bit slave address is 0x3B for reads and 0x3A for writes.
Let's assume you want to read out the single-byte register 0x17, which is the freefall/motion event source register.
Now, the message sequence (that you also showed in your post) is as follows:
Master sends: Start Signal
Master sends: Slave Address + Write-Bit (in our case 0x3A)
Slave sends: ACK (done by hardware)
Master sends: 0x17 (as we assumed you are interested in the content of that register)
Slave sends: ACK (done by hardware)
Master sends: repeated Start Signal
Master sends: Slave Address + Read-Bit (in our case 0x3B)
Slave sends: ACK (done by hardware)
Slave sends: content of register 0x17
Master sends: NACK
Master sends: Stop Signal
If you want to read out two-byte values from the MMA8453Q (such as the OUT_X_MSB/LSB pair starting at register 0x01), your sequence would be the following:
Master sends: Start Signal
Master sends: Slave Address + Write-Bit (in our case 0x3A)
Slave sends: ACK (done by hardware)
Master sends: 0x01 (as we assumed you are interested in the content of that register)
Slave sends: ACK (done by hardware)
Master sends: repeated Start Signal
Master sends: Slave Address + Read-Bit (in our case 0x3B)
Slave sends: ACK (done by hardware)
Slave sends: MSB of register 0x01
Master sends: ACK
Slave sends: LSB (from register 0x02, via the automatic address-pointer increment)
Master sends: NACK
Master sends: Stop Signal
It is important for the master to know in advance how many bytes will be received when requesting data from a particular register of the slave. Only with that knowledge can the master send the proper number of ACKs before the final NACK followed by the Stop Signal.

Related

Reading data from sensor through NFC tag type 5 (NFC-V)

I want to make an application in Android Studio for reading data from an ISL29125 RGB sensor through an NFC type 5 tag (ISO 15693). The NFC tag is connected to the sensor over an I2C bus. I'm using the addressed command mode for the peripheral transaction, according to the datasheet of the NFC tag. My code for the addressed-mode peripheral transaction is the following:
byte[] command = new byte[]{
(byte)0x20, //Request flags (Address mode ON)
(byte)0xA2, //PERIPHERAL TRANSACTION command
(byte)0x2B, //Manufacturer code byte
(byte)0x00, (byte)0x00, (byte)0x00, (byte)0x00, (byte)0x00, (byte)0x00, (byte)0x00, (byte)0x00, //UID
(byte)0x00, //Parameter byte = Stop bit disabled
(byte)0x03, //NI2CWR (Number of bytes to be written) = 3
(byte)0x88, //I2C slave address (write)
(byte)0x09, //I2C slaves' register address
(byte)0x89, //I2C slave address (read)
(byte)0x01, //NI2CRD (Number of bytes to be read) = 1
};
System.arraycopy(id,0,command,3,8); //Replace the UID placeholder with the tag's id
textView.setText("This is command you sent: "+(getHex(command)));
byte[] userdata= nfcvTag.transceive(command);
userdata = Arrays.copyOfRange(userdata, 0, 32);
viewResult.setText(getHex(userdata));
This is how the peripheral transaction command should look according to the datasheet:
After sending this, I receive 32 bytes of 0x00, even though the sensor is powered and light reaches it (it is an RGB sensor). Also, the datasheet of the NFC tag doesn't mention where the I2C slave address should be placed in the command (I inserted it near the end, as the bytes 88 09 and 89, but I'm not sure that is right). The tag is a MAX66242 and the sensor is an ISL29125 (https://www.intersil.com/content/dam/Intersil/documents/isl2/isl29125.pdf).
Reading sequence from sensor
I want to read data from register 0x09 (Green LOW).
My question is: does anybody know where the problem can be? And why do I receive just 0x00?
I think the problem might be with initialization. How could I do that, if I wanted to try it?
Thank you for any advice.
I don't know whether your question is still open or already solved...
Have you tried sending a stop bit?
That would be:
(byte)0x10, //Parameter byte = Stop bit enabled
instead of "(byte)0x00, //Parameter byte = Stop bit disabled".
That might help terminate the I2C sequence.

Why master is incrementing Address in AMBA AHB Burst transfer?

In AHB burst mode, the master has to give only the starting address, and the slave has to calculate the remaining addresses. But in the picture below (from the AHB specification) the address on HADDR increments at every clock. Why? Am I wrong?
The master has to change HADDR for every transfer in a burst, not just give the starting address.
The benefit of the master providing addresses is that the slave need not have address-incrementing logic inside it and can simply use the HADDR signal on the bus. The benefit of a burst over a series of single transfers is simple: the slave can prepare for the next transfer while handling the current one, since it "knows" the next address (addresses in a burst always increment by the same value). A series of single transfers could target random addresses (the slave must assume the worst case, since it does not know what will appear on the bus next), which can be harder for the slave to handle.
I think HADDR is not used by the slave at every clock. The designers put those addresses there for debugging, and it's easier for the slave to use the HBURST signal.

BLE: Max number of packets in Connection interval

Is there a limit on the maximum number of packets (LE_DATA) that can be sent by either slave or master during one connection interval?
If this limit exists, are there any specific conditions for this limit (e.g. only x number of ATT data packets)?
Are master/slave required or allowed to impose such a limit by specification?
(I hope I'm not reviving a dead post, but I think section 4.5.1 is better suited to answer this than 4.5.6.)
The spec doesn't define a limit on the number of packets. It just states the following:
4.5.1 Connection Events - BLUETOOTH SPECIFICATION Version 4.2 [Vol 6, Part B]
(...)
The start of a connection event is called an anchor point. At the anchor point, a master shall start to transmit a Data Channel PDU to the slave. The start of connection events are spaced regularly with an interval of connInterval and shall not overlap. The master shall ensure that a connection event closes at least T_IFS before the anchor point of the next connection event. The slave listens for the packet sent by its master at the anchor point.
T_IFS is the "Inter Frame Space" time and shall be 150 μs. Simply put, it's the job of the master to solve this problem. As far as I know, iOS limits the packet number to 4 per connection event, for instance. Android may have other hard-coded limits depending on the OS version.
There is a maximum data rate that can be achieved on both BT and BLE. You can tweak this data rate by changing the MTU (maximum transmission unit, i.e. packet size) up to the maximum MTU both ends of the transmission can handle. But AFAIK there is no strict constraint on the number of packets, besides the physical ones imposed by the data rate.
You can find more in the spec.
I could find the following in Bluetooth Spec v4.2:
4.5.6 Closing Connection Events
The MD bit of the Header of the Data Channel PDU is used to indicate
that the device has more data to send. If neither device has set the
MD bit in their packets, the packet from the slave closes the
connection event. If either or both of the devices have set the MD
bit, the master may continue the connection event by sending another
packet, and the slave should listen after sending its packet. If a
packet is not received from the slave by the master, the master will
close the connection event. If a packet is not received from the
master by the slave, the slave will close the connection event.
Two consecutive packets received with an invalid CRC match within a
connection event shall close the event.
This means both slave and master have a self-imposed limit on the number of packets they want to transmit during a CI. When either party doesn't wish to send more data, it just sets this bit to 0 and the other one understands. This should usually be driven by the number of pending packets on either side.
Since I was looking for logical limits due to spec or protocol, this probably answers my question.
Physical limits to the number of packets per CI would be governed by the data rate and, as #morynicz mentioned, by the MTU etc.
From my understanding, the limit is: min{max master event length, max slave event length, connection interval}.
To clarify, both the master and slave devices (specifically, the BLE stack thereof) typically have event length or "GAP event length" times. This time limit may be used to allow a central and/or advertiser and/or broadcaster to schedule the "phase offset" of more than one BLE radio activity, and/or limit the CPU usage of the BLE stack for application processing needs. E.g. a Nordic SoftDevice stack may have a default event length of 3.75ms that is indefinitely extendable (up to the connection interval) based on other demands on the SoftDevice's scheduler. In Android and iOS BLE implementations, this value may be opaque or not specified (e.g. Android may set this value to "0", which leaves the decision up to the controller implementation associated with the BLE chip of that device).
Note also that either the master or the slave may effectively "drop out" of a connection event earlier than these times if their TX/RX buffers are filled (e.g. Nordic SoftDevice stack may have a buffer size of 6 packets/frames). This may be implemented by not setting the MD bit (if TX buffer is exhausted) and/or "nacking" with the NESN bit (if RX buffer is full). However, while the master device can actually "drop out" by ending the connection event (not sending any more packets), the slave device must listen for master packets as long as at least one of master and slave have the MD bit set and the master continues to transmit packets (e.g. the slave could keep telling the master that it has no more data and also keep NACKing master packets because it has no more buffer space for the current connection event, but the master may keep trying to send for as long as it wants during the connection interval; not sure how/if the controller stack implements any "smarts" regarding this).
If there are no limits from either device in terms of stack-specified event length or buffer size, then presumably packets could be transmitted back and forth the entire connection interval (assuming at least one side had data to send and therefore set the MD bit). Just note for throughput calculation purposes that there is a T_IFS spacing (currently specified at 150us) between each packet and before the end of the connection interval.

Modbus simulator weird behavior

I am running the following Modbus slave simulator, http://www.modbusdriver.com/diagslave.html, together with the following Modbus poller, http://www.modbusdriver.com/modpoll.html. The weird thing is, I cannot get them to discover each other. Here is the output from the slave simulator:
Protocol configuration: Modbus RTU
Slave configuration: address = -1, master activity t/o = 3.00
Serial port configuration: /dev/ttyS0, 19200, 8, 1, even
Server started up successfully.
Listening to network (Ctrl-C to stop)
....................
and the following is the output from the modbus poller.
Protocol configuration: Modbus RTU
Slave configuration...: address = 1, start reference = 1, count = 1
Communication.........: /dev/ttyS0, 19200, 8, 1, even, t/o 1.00 s, poll rate 1000 ms
Data type.............: 16-bit register, output (holding) register table
-- Polling slave... (Ctrl-C to stop)
Reply time-out!
-- Polling slave... (Ctrl-C to stop)
As you can see, the Modbus slave simulator and the Modbus poller have the same serial settings, so they should be able to find each other. However they cannot, which I find odd. Does anyone have any suggestions on what may be causing this?
The time-out can occur for various reasons, such as:
- The register address you are polling does not exist in the slave device (also check that the type of register you are polling is right).
- The connection has not been established between the slave and the master (this can be due to configuring the slave and master serial port interfaces differently). Also check that the serial link is physically wired to the correct pins on both ends (slave and master).
- Check the timeout period set at the master. You may have set it to a very low value, so the responses from the slaves are being missed.
Hope this helps.
I recently had a similar Modbus case. There are two examples at hand that you can refer to.
The socket approach:
https://github.com/asgardpz/Modbus_Server
https://github.com/asgardpz/Modbus_Client
Using the HslCommunication component:
https://github.com/asgardpz/Hsl
The Readme.md has screenshots of the program, so you can see whether it is the sample you want.

Verilog : Simple I2C read operation

I have written a program to read data from a Microchip I2C EEPROM 24XX64. Initially I was able to get an acknowledge from the slave for the command byte which indicates a READ operation. However, instead of data bits I saw StL (a "write drive low" signal) in the ModelSim simulator. I would like to know the reason for this and what must be done to overcome it.
To read from an I2C slave, you usually have to write the register address first. The process to read is:
START
Device Address + WRITE
Register Address (# of bytes depends on slave)
REPEATED START
Device Address + READ
Slave ACKs
Master reads bytes and NACKs when it's had enough
STOP
Did you do a write to set up a register address for the read?
