Reading data from sensor through NFC tag type 5 (NFC-V) - android-studio

I want to build an application in Android Studio that reads data from an ISL29125 RGB sensor through an NFC tag type 5 (ISO 15693). The NFC tag is connected to the sensor over an I2C bus. I'm using the addressed-mode peripheral transaction command according to the NFC tag's datasheet. My code for the addressed-mode peripheral transaction is the following:
byte[] command = new byte[]{
        (byte)0x20, //Request flags (addressed mode ON)
        (byte)0xA2, //PERIPHERAL TRANSACTION command
        (byte)0x2B, //Manufacturer code byte
        (byte)0x00, (byte)0x00, (byte)0x00, (byte)0x00, (byte)0x00, (byte)0x00, (byte)0x00, (byte)0x00, //UID placeholder
        (byte)0x00, //Parameter byte = stop bit disabled
        (byte)0x03, //NI2CWR (number of bytes to be written) = 3
        (byte)0x88, //I2C slave address (write)
        (byte)0x09, //I2C slave's register address
        (byte)0x89, //I2C slave address (read)
        (byte)0x01, //NI2CRD (number of bytes to be read) = 1
};
System.arraycopy(id, 0, command, 3, 8); //Copy the tag's UID into the command
textView.setText("This is the command you sent: " + getHex(command));
byte[] userdata = nfcvTag.transceive(command);
userdata = Arrays.copyOfRange(userdata, 0, 32);
viewResult.setText(getHex(userdata));
How the peripheral transaction command should look according to the datasheet
After sending this, I receive 32 bytes of 0x00, even though the sensor is powered and light is reaching it. The NFC tag's datasheet also doesn't mention where the I2C slave address should be placed in the command (I inserted it near the end, as the bytes 88 09 89, but I'm not sure that's right). The tag is a MAX66242 and the sensor is an ISL29125 (https://www.intersil.com/content/dam/Intersil/documents/isl2/isl29125.pdf).
Reading sequence from sensor
I want to read data from register 0x09 (Green LOW).
My question is: does anybody know where the problem could be? And why do I receive only 0x00?
I think the problem might be with initialization. How could I do that, if I wanted to try it?
Thank you for any advice.

I don't know whether your question is still valid or just pending...
Have you tried sending a stop bit?
That would be:
(byte)0x10, //Parameter byte = stop bit enabled
instead of "(byte)0x00, //Parameter byte = stop bit disabled".
That might help to terminate an I2C sequence...
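One more thing worth checking, since you asked about initialization: according to its datasheet, the ISL29125 powers up with the RGB conversions disabled, so the data registers read zero until CONFIG-1 (register 0x01) is written with an operating mode. You could try a write-only peripheral transaction first. The frame below is only a guess based on your read transaction (a write phase with nothing to read back), so verify the write-only layout against the MAX66242 datasheet; it is shown as a plain C byte array for brevity:
uint8_t init_cmd[] = {
    0x20,             /* Request flags (addressed mode ON) */
    0xA2,             /* PERIPHERAL TRANSACTION command */
    0x2B,             /* Manufacturer code byte */
    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* UID (copy the tag's UID here) */
    0x10,             /* Parameter byte = stop bit enabled */
    0x03,             /* NI2CWR = 3 bytes to write */
    0x88,             /* ISL29125 I2C slave address (write) */
    0x01,             /* CONFIG-1 register */
    0x0D,             /* RGB mode, 10k lux range, 16-bit resolution (per ISL29125 datasheet) */
    0x00,             /* NI2CRD = 0 (assumption: no read phase, so no read address byte) */
};
After that, your existing read of register 0x09 should return the low byte of the green channel once a conversion has completed.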

Related

Why does the device tree have to list interrupts for a UART device?

In the device tree listed here we can see a UART device:
uarta: serial@70006000 {
    compatible = "nvidia,tegra210-uart", "nvidia,tegra20-uart";
    reg = <0x0 0x70006000 0x0 0x40>;
    reg-shift = <2>;
    interrupts = <GIC_SPI 36 IRQ_TYPE_LEVEL_HIGH>;
    clocks = <&tegra_car TEGRA210_CLK_UARTA>;
    clock-names = "serial";
    resets = <&tegra_car 6>;
    reset-names = "serial";
    dmas = <&apbdma 8>, <&apbdma 8>;
    dma-names = "rx", "tx";
    status = "disabled";
};
The tegra210-uart driver is declared here
Why does uarta need the interrupts property? Does it produce or receive interrupts? My guess is that it produces software interrupts: when it writes something to the serial port, it triggers an interrupt. But why must this interrupt be listed in the device tree? And where is the code that handles this interrupt?
The reg property of the above node (reg = <0x0 0x70006000 0x0 0x40>) refers to a UART address. What is this address exactly? If I understand correctly, this is the address of an 8250-style UART device, but I'd like to know what the pieces of this property mean and how the kernel talks to this chip.
Driver documentation is available at https://www.kernel.org/doc/Documentation/devicetree/bindings/serial/nvidia%2Ctegra20-hsuart.txt
According to this forum thread, https://forums.developer.nvidia.com/t/enabling-interrupts-for-uart-xavier-agx/166269, interrupts are used to avoid regularly polling the UART.
Registering a handler on this interrupt allows code to run only when data has actually been received and is available in the DMA buffer, or when the complete outgoing message provided to the DMA has been sent. Such a DMA usage example is available at: UART Tx mode with DMA enabled
Memory address 0x70006000 is the base location of the UARTA controller registers (there are four UARTs, A through D, in Tegra 2), where commands are written and status is read. These documents may help in understanding the register set:
https://docs.freebsd.org/en/articles/serial-uart/index.html
https://www.lammertbies.nl/comm/info/serial-uart
For instance, the Scratch Register (SCR, register offset 0x07) can be used to write and read back an arbitrary byte, as a way to test that the UART is present and the controller operates as expected.
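If you want to try that yourself from user space, a minimal sketch (assuming root access and a kernel that permits /dev/mem access to this range, with the UARTA clock already running) could look like the following. Note that reg-shift = <2> means 8250 register N lives at byte offset N << 2:
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define UARTA_BASE 0x70006000UL   /* from reg = <0x0 0x70006000 0x0 0x40> */
#define SCR        7              /* 8250 scratch register index */
#define REG_SHIFT  2              /* from reg-shift = <2> */

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    volatile uint8_t *uart = mmap(NULL, 0x40, PROT_READ | PROT_WRITE,
                                  MAP_SHARED, fd, UARTA_BASE);
    if (uart == MAP_FAILED) { perror("mmap"); return 1; }

    uart[SCR << REG_SHIFT] = 0x5A;                     /* write any byte... */
    printf("SCR = 0x%02x\n", uart[SCR << REG_SHIFT]);  /* ...and read it back */

    munmap((void *)uart, 0x40);
    close(fd);
    return 0;
}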
I recommend downloading the reference documentation (chapter 22 covers the UART), available from https://developer.nvidia.com/embedded/tegra-2-reference

Is it possible to stream infinite data over SPI using DMA on an STM32F3?

I'm developing an RF modem based on a new protocol that streams 96-byte frames continuously, one after another, until the communication ends. I plan on using two 96-byte buffers in the STM32; the next paragraphs explain why.
I want to send the first 96-byte frames over USB CDC to the STM32. The external modem chip will then generate a 9600 bps clock, and the STM32 will have to write the payload bit by bit on a specified output pin (on the trailing edge of each clock pulse).
When the STM32 notices that it has sent half of a 96-byte frame, it notifies the PC to send more data, and the PC immediately refills the second 96-byte buffer over USB CDC. When the STM32 finishes sending the first buffer, it immediately starts sending the second buffer's content; when it has sent half of the second buffer, it again asks the PC for another 96-byte frame.
And it goes on that way until the PC sends a command to stop transmission.
So this transfer mode is serial, driven by an external "trigger clock".
Is this possible using DMA, and how could I set it up?
I want to use DMA so that USB stays usable while data is already streaming to the radio modem chip. Is this the right approach?
I'm working on an open-source radio communication project with both packet and stream capabilities and digital voice, designing the electronics for the PC radio modem. The project is called M17 and is maintained by Wojtek SP5WWP.
Regarding the general architecture: serial communication over USB ACM does not have to use buffers of the same size, nor be synchronized with the downstream communication over SPI. You can use buffers as big as practically possible so the PC can send data in advance; this reduces the chance of a buffer underflow if the PC does not provide data fast enough. Use a circular buffer and fill it whenever a packet arrives from USB, as in the sketch below.
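A minimal sketch of such a circular buffer (all names here are illustrative, not from any particular framework; RB_SIZE must be a power of two):
#include <stdint.h>

#define RB_SIZE 1024               /* as large as RAM allows, power of two */

static uint8_t rb_data[RB_SIZE];
static volatile uint16_t rb_head;  /* advanced by the USB receive path */
static volatile uint16_t rb_tail;  /* advanced when refilling the DMA buffer */

/* Called when a packet arrives over USB CDC. Returns bytes actually stored. */
uint16_t rb_put(const uint8_t *src, uint16_t len)
{
    uint16_t n = 0;
    while (n < len && (uint16_t)(rb_head - rb_tail) < RB_SIZE)
        rb_data[rb_head++ & (RB_SIZE - 1)] = src[n++];
    return n;
}

/* Called from the DMA half/full-transfer events to refill one half. */
uint16_t rb_get(uint8_t *dst, uint16_t len)
{
    uint16_t n = 0;
    while (n < len && rb_head != rb_tail)
        dst[n++] = rb_data[rb_tail++ & (RB_SIZE - 1)];
    return n;
}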
DMA is the right approach. People often say DMA is only necessary for high-bandwidth operations, but it can actually be easier to work with DMA than to handle an interrupt for every byte, even at only 9600 bits per second.
The DMA controller in the STM32F3 has a half-transfer-complete bit (HTIF in DMA_ISR) that you can poll or have generate an interrupt. In conjunction with the transfer-complete status (TCIF) and the circular mode bit (CIRC in DMA_CCR), you can organize a double-buffered data pipe so that transfers overlap with whatever else the MCU is doing. The application reloads the first half of the DMA buffer on the HTIF event; when the TCIF event happens, it reloads the second half. The reload has to be done quickly, before the other half has also been transferred. Note that you only need a double-buffered pipeline when you have to stream data continuously, i.e. when the overall amount is larger than the DMA buffer.
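As a sketch of that reload pattern for the transmit direction of this question (assuming SPI1, whose TX requests map to DMA1 channel 3 on the STM32F3; check the request mapping table for your exact part), using the rb_get() helper from the circular-buffer sketch above:
uint8_t dma_buf[192];   /* two 96-byte halves, sent out continuously in circular mode */

void DMA1_Channel3_IRQHandler(void)
{
    if (DMA1->ISR & DMA_ISR_HTIF3) {     /* first half has gone out */
        DMA1->IFCR = DMA_IFCR_CHTIF3;    /* acknowledge the flag */
        rb_get(&dma_buf[0], 96);         /* refill it while the second half drains */
    }
    if (DMA1->ISR & DMA_ISR_TCIF3) {     /* second half has gone out */
        DMA1->IFCR = DMA_IFCR_CTCIF3;
        rb_get(&dma_buf[96], 96);        /* refill before the circular wrap-around */
    }
}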
Stopping a circular DMA may be tricky. I suppose both the STM32 and the external chip know how many bytes to send; in that case, after this amount has been transferred, disable the DMA channel.
It seems you need a slave SPI on the STM32, as the external chip generates the SPI clock.
DMA is not difficult to set up, but it needs multiple things configured correctly to work. I assume register-level programming; if you use some kind of framework, you'll need to find out how it implements these features:
1. Enable the clocks for the SPI peripheral, the GPIO port of the SPI pins, and the DMA controller, and configure the pins as alternate function.
2. Find the right DMA channel for the SPI peripheral. SPI DMA usually needs two channels, TX and RX, but with a slave SPI you may get away with one.
3. Configure the SPI, paying attention to clock polarity and phase, and set it to generate a DMA request for each TX and/or RX.
4. Set the channel's CPAR register to point at the SPI DR register and program all the other DMA channel registers appropriately.
5. Enable the DMA channel(s), then enable the SPI in slave mode.
When the SPI master clocks data on the MOSI/SCK pins, the DMA controller will move it to memory. When the buffer is half-full and completely full, the channel sets the HTIF and TCIF bits and generates an interrupt, if you told it to. Use these events to implement flow control; a register-level sketch follows.
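Putting those steps together for the transmit side of this question, a sketch for a slave SPI1 with a circular TX DMA on an STM32F3 might look roughly like this. The pinout, the channel mapping, and the CPOL/CPHA settings are assumptions you must adapt to the modem chip:
#include "stm32f3xx.h"   /* CMSIS device header, assumed */

extern uint8_t dma_buf[192];   /* reloaded by the IRQ handler sketched above */

void spi1_slave_tx_dma_init(void)
{
    /* 1. Clocks for GPIOA, DMA1 and SPI1 */
    RCC->AHBENR  |= RCC_AHBENR_GPIOAEN | RCC_AHBENR_DMA1EN;
    RCC->APB2ENR |= RCC_APB2ENR_SPI1EN;

    /* 2. PA5 = SCK, PA6 = MISO, PA7 = MOSI as alternate function AF5 (assumed pinout) */
    GPIOA->MODER  |= GPIO_MODER_MODER5_1 | GPIO_MODER_MODER6_1 | GPIO_MODER_MODER7_1;
    GPIOA->AFR[0] |= (5u << 20) | (5u << 24) | (5u << 28);

    /* 3. DMA1 channel 3 (SPI1_TX): memory -> SPI1->DR, circular, IRQ on half and full */
    DMA1_Channel3->CPAR  = (uint32_t)&SPI1->DR;
    DMA1_Channel3->CMAR  = (uint32_t)dma_buf;
    DMA1_Channel3->CNDTR = 192;
    DMA1_Channel3->CCR   = DMA_CCR_MINC | DMA_CCR_CIRC | DMA_CCR_DIR   /* mem -> periph */
                         | DMA_CCR_HTIE | DMA_CCR_TCIE | DMA_CCR_EN;

    /* 4. SPI as slave, 8-bit frames, TX DMA requests. SSM=1 with SSI=0 keeps the
       slave permanently selected if NSS is not wired. CPOL/CPHA left at 0 here;
       match them to the modem's "write on trailing edge" timing. */
    SPI1->CR2 = SPI_CR2_TXDMAEN | SPI_CR2_DS_2 | SPI_CR2_DS_1 | SPI_CR2_DS_0;
    SPI1->CR1 = SPI_CR1_SSM | SPI_CR1_SPE;   /* MSTR = 0 -> slave, clock from the modem */

    NVIC_EnableIRQ(DMA1_Channel3_IRQn);
}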

How to read data from i2c using i2cget?

I'm new to embedded devices and am trying to understand how to use i2cget (and the whole I2C protocol, really).
I'm using an MMA8452 accelerometer, and the datasheet says the slave address is 0x1D (if SA0 = 1, which I believe refers to the I2C bus being on channel 1 on my Raspberry Pi v2).
From the command line, I enter
sudo i2cget -y 1 0X1d
It returns
0X00
I think that means I'm attached to the correct device.
So now I'm trying to figure out how to get actual data back from the accelerometer.
The i2cget usage line says
i2cget [-y] i2cbus chip-address [data-address [mode]]
So I have tried
sudo i2cget -y 1 0x1D 0x01
where 0x01 is OUT_X_MSB. I'm not entirely sure what I'm expecting to get back, but I figured if I saw some data other than 0x00, I might be able to work it out.
Am I using i2cget wrong? Is there a better way to learn about and get data from I2C?
The datasheet for my accelerometer chip is at
http://dlnmh9ip6v2uc.cloudfront.net/datasheets/Sensors/Accelerometers/MMA8452Q.pdf
I am way too late, but this might help other people. You might be getting the 0x00 output every time you use i2cget because you forgot to set some mode. For instance, I was working with a PCF8583, which is a clock and calendar chip that can also be used as a counter.
My goal was to use this chip as a counter. It was connected to i2c bus 1 with device address 0x51. Reading the datasheet, I found that the chip works as a counter when the mode is set to 0x20 in the control register 0x00. The command I used to do this:
i2cset 1 0x51 0x00 0x20
Now, I could read the counter pulses from a wind sensor with the command:
watch i2cget -y 1 0x51
watch is just a Linux command that runs the specified command (here i2cget) repeatedly every 2 seconds and displays the result on standard output.
From the datasheet it is clear that the default value of the status register at address 0x00 is 0x00, so you are doing fine, I guess. See Table 11, Register Map, in the datasheet.
You may also try reading the device ID at register address 0x0D: you should get the value 0x2A when you read this register.
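If you outgrow the command-line tools, the same transaction can be done from a small C program through the i2c-dev interface (a sketch; error handling on the read/write calls is trimmed):
#include <fcntl.h>
#include <linux/i2c-dev.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/i2c-1", O_RDWR);      /* bus 1, as in the commands above */
    if (fd < 0) { perror("open"); return 1; }

    if (ioctl(fd, I2C_SLAVE, 0x1d) < 0) {     /* MMA8452 with SA0 = 1 */
        perror("ioctl");
        return 1;
    }

    unsigned char reg = 0x0d;                 /* WHO_AM_I register */
    unsigned char val;
    write(fd, &reg, 1);                       /* set the register pointer */
    read(fd, &val, 1);                        /* read one byte back */
    printf("WHO_AM_I = 0x%02x (expect 0x2a)\n", val);

    close(fd);
    return 0;
}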

Verilog : Simple I2C read operation

I have written a program to read data from a Microchip I2C EEPROM 24XX64. Initially, I was able to get an acknowledge from the slave for the command byte that indicates a READ operation. However, instead of data bits I saw an stL (write drive low) signal in the simulator. I would like to know the reason for this and what must be done to overcome it.
To read from an I2C slave, you usually have to write the register address first. The process for a read is as follows (sketched in code after the list):
START
Device Address + WRITE
Register Address (# of bytes depends on slave)
REPEATED START
Device Address + READ
Slave ACKs
Master Read bytes and NACKs when it's had enough
STOP
Did you do a write to set up a register address for the read?
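The same sequence in transaction-level pseudo-C, where i2c_start/i2c_stop/i2c_tx/i2c_rx are hypothetical master primitives you would provide (i2c_tx returns nonzero if the slave did not ACK). The 24XX64 takes a 16-bit word address, hence the two address bytes:
#include <stdint.h>

/* Random read from a 24XX64 EEPROM (control byte 0xA0/0xA1, chip selects A2..A0 = 0) */
int eeprom_random_read(uint16_t addr, uint8_t *buf, int len)
{
    i2c_start();
    if (i2c_tx(0xA0)) return -1;        /* device address + WRITE */
    i2c_tx(addr >> 8);                  /* word address, high byte */
    i2c_tx(addr & 0xFF);                /* word address, low byte */
    i2c_start();                        /* REPEATED START, no STOP in between */
    if (i2c_tx(0xA1)) return -1;        /* device address + READ */
    for (int i = 0; i < len; i++)
        buf[i] = i2c_rx(i < len - 1);   /* ACK every byte except the last (NACK) */
    i2c_stop();
    return 0;
}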

C++ serial communication on Linux with 9 data bits

A bit of an exotic question :D
I'm programming C++ on Ubuntu 10 and I need to implement the MDB (Multi-Drop Bus) protocol, which uses 9 data bits in serial communication (yes, 9 data bits :D).
Some drivers support 9 data bits on some UART chips, but most do not.
To explain briefly: MDB uses 8 bits for data and the 9th bit as a mode bit.
When the master sends the first byte, it sets the mode (9th) bit to 1, which means all devices on the bus are interrupted and inspect this first byte, which holds the address of a device.
If a listening device (one of many) finds its address in this first byte, it knows that the following bytes will be data bytes for it; data bytes have the mode bit set to 0.
An example in bits: 000001011 000000010 000000100 000000110 (one address byte and three data bytes).
In the return direction, from slave to master, the mode bit is used for end of transmission: the master reads from the serial port until it finds a 9-bit packet whose 9th bit is 1; usually the last 9-bit sequence is a checksum byte with mode = 1.
So finally, my question:
I know how to use the CMSPAR flag in termios to use the parity bit as the mode bit, i.e. setting it to either MARK (1) or SPACE (0).
An example, for all who don't know how:
First check whether this is defined; if not, there is probably no termios support:
# define CMSPAR 010000000000 /* mark or space (stick) parity */
And the code for sending with mark or space parity, i.e. simulating the 9th data bit:
struct termios tio;
bzero(&tio, sizeof(tio));
tcgetattr(portFileDescriptor, &tio);
if (useMarkParity)
{
    // Send with mark parity
    tio.c_cflag |= PARENB | CMSPAR | PARODD;
    tcsetattr(portFileDescriptor, TCSADRAIN, &tio);
}
else
{
    // Send with space parity
    tio.c_cflag |= PARENB | CMSPAR;
    tio.c_cflag &= ~PARODD;
    tcsetattr(portFileDescriptor, TCSADRAIN, &tio);
}
write(portFileDescriptor, DATA, DATALEN);
Now, what I don't know is HOW to set up parity checking on receive. I have tried almost all combinations and I cannot get the parity-error byte sequence.
Can anyone help me set up parity checking on receive so that it does not ignore parity and does not strip bytes, but adds a DEL before each "bad" received byte?
As it says in the POSIX serial help:
INPCK and PARMRK: If IGNPAR is enabled, a NUL character (000 octal) is sent to your program before every character with a parity error. Otherwise, a DEL (177 octal) and NUL character is sent along with the bad character.
So how do I correctly set PARMRK and INPCK so that the driver detects mode bit = 1 as a parity error and inserts DEL (177 octal) into the return stream?
Thank you :D
It sounds to me like you want to set space parity on the receiver and not enable IGNPAR. That way, when a byte with mark parity is received, it should generate the parity error with the DEL marker.
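In termios terms that would be something like the following sketch (the usual raw-mode and VMIN/VTIME setup is omitted). Note the caveat in the next answer: on Linux the marker actually inserted is '\377' '\0', not '\177':
struct termios tio;

tcgetattr(portFileDescriptor, &tio);

// Receive with space parity: an incoming 9th bit of 1 then shows up as a parity error
tio.c_cflag |= PARENB | CMSPAR;      // enable parity, stick (mark/space) mode
tio.c_cflag &= ~PARODD;              // PARODD clear = space parity

tio.c_iflag |= INPCK | PARMRK;       // check input parity, mark errors in the stream
tio.c_iflag &= ~(IGNPAR | ISTRIP);   // don't discard bad bytes, don't strip bit 8

tcsetattr(portFileDescriptor, TCSADRAIN, &tio);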
I was having the same problem running in a Linux guest OS. Running the same program on another machine with Linux as the host OS works, so I suspect that the virtual serial port does not pass on the parity error; see "PARMRK termios behavior not working on Linux". It is still possible that the VM is not the problem, because it was a completely different computer. I was, however, able to get parity errors using RealTerm on Windows (the host OS on the computer where Linux was the guest).
Also note that the code in n_tty.c shows it inserting '\377' '\0' rather than '\177' '\0'. This was also verified on the working configuration.
