MFRC522 reading/writing old RFID tags

I can successfully read and write the (1K) tags that came with the reader, but the tags I need to use have just 4 bytes per block rather than the 16 bytes of the 1K tags, and presumably no security. The datasheet for the reader chip is not much help (I am going cross-eyed looking at it), and the available code for using it does not suggest which settings need changing to read these older tags. So: should I be able to read (I am guessing) Type 2 tags with this reader, and does anybody have documentation that might help me make the NXP chip (a Chinese clone) read 4 bytes instead of 16?
Thanks in advance.
P

So no: the tag reader module only reads/writes a 16-byte "block", with block 0 being immutable. The tags I have are "striped": each 4 bytes is repeated 4 times (with an offset of 1). There you go.
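For whoever lands here with the same tags: here is a minimal sketch of recovering the 4 real bytes from a 16-byte block read, assuming the striping described above means the same 4 bytes repeated 4 times with each repetition rotated by one position. That layout and the rotation direction are my assumptions, so verify against a dump of your own tags before trusting it:

function destripe(block: Uint8Array): Uint8Array {
  if (block.length !== 16) throw new Error("expected a 16-byte block");
  const data = block.slice(0, 4); // first repetition, taken as-is
  // Sanity check: each later repetition should be the same 4 bytes,
  // rotated by one more position each time (assumed layout).
  for (let rep = 1; rep < 4; rep++) {
    for (let i = 0; i < 4; i++) {
      if (block[rep * 4 + i] !== data[(i + rep) % 4]) {
        throw new Error(`stripe mismatch at repetition ${rep}, byte ${i}`);
      }
    }
  }
  return data;
}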

Related

NVMe sensor reading error with more than one NVMe configured in entity-manager

Hi, I'm trying to read NVMe sensors using NVMeSensor from dbus-sensors. I configured 4 NVMes in my *.json entity-manager (EM) config file, and it logged "Sensor x error reading" for all of them. I put the config in the common EM config for the board, together with fan sensors, ADC sensors and others, referring to this: https://github.com/ibm-openbmc/entity-manager/blob/14a7bc9303d747dbc20cb702083e7af0a3cf0496/configurations/NVME%20P4000.json#L10-L41. In this case, I see that boost::asio::async_read at https://github.com/openbmc/dbus-sensors/blob/ce6bcdfc28f60173093087050a43adbc586fd6fa/src/NVMeBasicContext.cpp#L290 returns a response of size 0, but the resp from https://github.com/openbmc/dbus-sensors/blob/ce6bcdfc28f60173093087050a43adbc586fd6fa/src/NVMeBasicContext.cpp#L83 has a size of 6 and a valid value.
However, when I configure only 1 NVMe in EM, it returns values normally on D-Bus.
I wonder if NVMeSensor only supports NVMes with a FRU, and whether we have to have a single json file for each, just like NVME P4000.json.
What should I do when I want to configure all the NVMes inside the board's EM config? I can't find any reference.
I have not found the meaning of "Address" in the NVME1000 config, since it will use 0x6a anyway, at least from what I have seen. Can you tell me what it is for?
I'm really new to OpenBMC and don't understand much of the mechanism of the code, so please correct my understanding if it's wrong. Any advice will be appreciated a lot.
Thank you.
Edited
I realized that when one of the NVMes is not present, all of them fail. I think the failed one affects the reading stream or the response stream (respStream), although each NVMe has a separate request stream (reqStream). I don't know why they interfere with each other, but I see that when the resp size from SMBus is < 0, it is still written to the stream without resizing the resp vector the way it is when the size is normal. I added resp.resize(len) here (https://github.com/openbmc/dbus-sensors/blob/ce6bcdfc28f60173093087050a43adbc586fd6fa/src/NVMeBasicContext.cpp#L153), it works, and we can do hot plug. Is that because I did not use a FRU probe for the NVMes...?
I wonder if NVMeSensor only supports NVMes with a FRU, and whether we have to have a single json file for each, just like NVME P4000.json.
The "Probe" field in entity-manager configuration json is used for probe rules for the device. FRU is just one way. For example, if you know the exact i2c bus and address, you can use something like
xyz.openbmc_project.Inventory.Decorator.I2CDevice({'Bus': 4, 'Address': 60})
Here "xyz.openbmc_project.Inventory.Decorator.I2CDevice" is the D-Bus interface, 'Bus' and 'Address' are properties, and 4 and 60 are their values.
And "Probe" can be an array with AND OR operators. Like this example.
What should I do when I want to configure all the NVMes inside the board's EM config?
I think adding all 4 NVME1000 blocks to your board json will do it, as long as they have different names and bus/address configurations.
I have not found the meaning of "Address" in the NVME1000 config, since it will use 0x6a anyway, at least from what I have seen. Can you tell me what it is for?
On Intel P4000 series SSDs, 0x53 (what is in nvme_p4000.json) is the 7-bit address of the FRU EEPROM, while 0x6a is the 7-bit address for the NVM Express Basic Management Command (Appendix A of the NVMe-MI 1.2b specification). These addresses are only documented in the product spec, which is not generally available :(
Putting all the NVMe configs inside the baseboard EM config is OK. There are hotplug issues with the dbus-sensors nvmesensor, so when one of my configured NVMes is not present, all the others fail. I had plugged only 1 NVMe into one of the 4 slots, which triggers the problem. I was told they are looking into this; in the meantime I'm using the trick I put in the Edit section of my question.
They hardcode 0x6A as the i2c address in the nvmesensor code, for the reason #KagurazakaKotori gave.

Node.js readUIntBE arbitrary size restriction?

Background
I am reading buffers using the native Node.js Buffer API. This API has two functions called readUIntBE and readUIntLE, for big-endian and little-endian reads respectively.
https://nodejs.org/api/buffer.html#buffer_buf_readuintbe_offset_bytelength_noassert
Problem
While reading the docs, I stumbled upon the following line:
byteLength Number of bytes to read. Must satisfy: 0 < byteLength <= 6.
If I understand correctly, this means that I can only read 6 bytes at a time using this function, which makes it useless for my use case, as I need to read an 8-byte timestamp.
Questions
Is this a documentation typo?
If not, what is the reason for such an arbitrary limitation?
How do I read 8 bytes in a row (or, more generally, sequences longer than 6 bytes)?
Answer
After asking in the official Node.js repo, I got the following response from one of the members:
No it is not a typo
The byteLength corresponds to e.g. 8bit, 16bit, 24bit, 32bit, 40bit and 48bit. More is not possible since JS numbers are only safe up to Number.MAX_SAFE_INTEGER.
If you want to read 8 bytes, you can read multiple entries by adding the offset.
Source: https://github.com/nodejs/node/issues/20249#issuecomment-383899009
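The limit follows from the math: Number.MAX_SAFE_INTEGER is 2^53 - 1, so 6 bytes (48 bits) is the largest whole-byte width that is always exact, while 7 bytes (56 bits) could already lose precision. A minimal sketch of the suggested two-read approach for an 8-byte big-endian timestamp (the buffer contents here are made up for illustration):

const buf = Buffer.from([0, 0, 1, 130, 100, 200, 50, 10]); // example bytes only
const hi = buf.readUIntBE(0, 4);                        // upper 32 bits
const lo = buf.readUIntBE(4, 4);                        // lower 32 bits
const timestamp = BigInt(hi) * 2n ** 32n + BigInt(lo);  // exact, as a BigInt
// Node.js 12+ also has a built-in 64-bit read returning a BigInt:
const same = buf.readBigUInt64BE(0);

If the real values are known to fit in 53 bits (a millisecond Unix timestamp does, comfortably), computing hi * 2 ** 32 + lo with plain numbers is exact too.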

Packing 20-byte chunks via BLE

I've never worked with Bluetooth before. I have to send data via BLE, and I've found there is a limit of 20 bytes per chunk.
The sender is an Arduino, and the receiver could be either an Android app or a Node.js app on a PC.
I have to send 9 values, stored as floats, so 4 bytes * 9 = 36 bytes; I need 2 chunks for all my data via BLE. The receiving side needs both chunks to process them. If some data is lost, I don't care.
I'm not an expert in network protocols, and I think I have to give each message an incremental timestamp so that the receiver can glue together the two chunks with the same timestamp, or discard the older one when a higher timestamp arrives. But I'm not sure how to do a checksum, whether I really need one, or whether - for a simple beta version of my system - I can ignore all of those problems.
Can anyone give me some advice, like examples of similar situations handled with BLE communication?
You can get around the size limitation using the "Read Blob Request" of ATT. It allows you to read an attribute while also giving an offset, so you can read the attribute with an offset of 0; if there are more than ATT_MTU bytes, you request again with the offset at ATT_MTU*1; if there's still more, ATT_MTU*2, and so on. (You can read about it in section 3.4.4.5 of the Bluetooth v4.1 specification; it's in the 4.0 spec too, but I don't have that in front of me right now.)
If the value changes between requests, I'm not sure how you would detect that. You could have the attribute send notifications when there's a change, to interrupt the process in case the value changes in the middle of reading it.
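If, instead of ATT reads, the Arduino pushes the two 20-byte chunks (e.g. as notifications) with the incremental counter the asker describes, the receiving side (the Node.js app mentioned in the question) could reassemble them along these lines. The chunk layout - one sequence byte, one chunk-index byte, 18 payload bytes - and all the names here are assumptions for illustration:

type Pending = { parts: (Buffer | undefined)[]; seq: number };

let pending: Pending | null = null;

function onChunk(chunk: Buffer): Float32Array | null {
  const seq = chunk[0];          // incrementing message counter
  const index = chunk[1];        // 0 or 1: which half of the 36-byte message
  const payload = chunk.subarray(2);

  if (pending === null || pending.seq !== seq) {
    // Newer (or first) message: discard any incomplete older one.
    pending = { parts: [undefined, undefined], seq };
  }
  pending.parts[index] = Buffer.from(payload);

  if (pending.parts[0] && pending.parts[1]) {
    const body = Buffer.concat([pending.parts[0], pending.parts[1]]);
    pending = null;
    // 9 little-endian floats; the endianness is another assumption.
    const values = new Float32Array(9);
    for (let i = 0; i < 9; i++) values[i] = body.readFloatLE(i * 4);
    return values;
  }
  return null; // still waiting for the other half
}

A checksum is probably overkill for a beta: the BLE link layer already CRC-checks every packet, so corrupted chunks are dropped before they reach the application; the sequence byte mainly guards against gluing together halves of different messages.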

File handling in TinyOS or TOSSIM

I need to read data from a text file in a TinyOS file (a nesC file). I searched a lot on the Internet but couldn't find a way.
Is there any way?
I don't know about TOSSIM, but using a real sensor board it's possible to do so.
What you could do is write a program in Java, C#, etc. that reads the file and passes the acquired data to the serial/USB port as a SERIAL PACKET. But you are limited to a maximum of 255 bytes per packet.
So you should make a simple protocol that takes care of splitting the data into chunks.
Of course, you need to know how to create a serial packet that the sensor boards can read. For that, read TEP#113. The short story is that a serial packet consists of:
HEADER + CONTENT + FOOTER
The header contains the protocol byte, destination and source addresses, etc.
The content is your message_t struct.
The footer has the CRC and some other bookkeeping.
You have to take care of the CRC calculation and also of escaping the start/end delimiters (I believe byte 126 or 127 is the delimiter, i.e. the marker of the start and end of a packet).
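For the framing/escaping part, TEP 113 describes HDLC-style byte stuffing: as far as I recall, 0x7E (decimal 126) delimits a frame, 0x7D is the escape byte, and an escaped byte is XORed with 0x20 - verify the constants against the TEP before relying on them. A sketch of the sender side, with the CRC computation omitted:

const FRAME = 0x7e;   // frame delimiter (assumed per TEP 113)
const ESCAPE = 0x7d;  // escape byte (assumed per TEP 113)

function frame(payload: Uint8Array): Uint8Array {
  const out: number[] = [FRAME];
  for (const b of payload) {
    if (b === FRAME || b === ESCAPE) {
      out.push(ESCAPE, b ^ 0x20); // stuff delimiter/escape bytes
    } else {
      out.push(b);
    }
  }
  out.push(FRAME);
  return Uint8Array.from(out);
}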

To pad or not to pad - creating a communication protocol

I am creating a protocol for two applications to talk over a TCP/IP stream, and I am figuring out how to design a header for my messages. Using the TCP header as an initial guide, I am wondering whether I will need padding. I understand that when we're dealing with a cache, we want data to fit in a cache line so that it is retrieved efficiently. However, I do not understand how it makes sense to pad a header, considering that an application will parse a stream of bytes and store it however it sees fit.
For example: I want to send a message header consisting of a 3-byte field followed by a 1-byte padding field for 32-bit alignment, and then send the message data.
In this case, the receiver will just take 3 bytes from the stream, throw away the padding byte, and then start reading the message data. As I see it, the receiver will then be storing the 3 bytes and the message data however it wants. The whole point of byte alignment is that data can be retrieved efficiently, but if the receiver doesn't care about the padding, how is the retrieval made any more efficient?
Without the padding, the receiver just takes the 3 header bytes from the stream and then takes the data bytes. Since the receiver stores these bytes however it wants, how does it matter whether or not the padding is there?
Maybe I'm missing the point of padding.
It's slightly hard to extract a question from this post, but with what I've said you guys can probably point out my misconceptions.
Please let me know what you guys think.
Thanks,
jbu
If word alignment of the message body is of some use, then by all means, pad the message to avoid other contortions. The padding will be of benefit if most of the message is processed as machine words with decent intensity.
If the message is a stream of bytes, for instance xml, then padding won't do you a whole heck of a lot of good.
As far as actually designing a wire protocol, you should probably consider using a plain text protocol with compression (including the header), which will probably use less bandwidth than any hand-designed binary protocol you could possibly invent.
I do not understand how it makes sense to pad a header, considering that an application will parse a stream of bytes and store it however it sees fit.
If I'm a receiver, I might pass a buffer (i.e. an array of bytes) to the protocol driver (i.e. the TCP stack) and say, "give this back to me when there's data in it".
What I (the application) get back, then, is an array of bytes which contains the data. Using C-style tricks like "casting" and so on, I can treat portions of this array as if they were words and double-words (not just bytes), provided that they're suitably aligned (which is where padding may be required).
Here's an example of a function which reads a DWORD from an offset in a byte buffer:
#include <stdint.h>

typedef uint8_t  byte;   // added so the snippet compiles standalone
typedef uint32_t DWORD;

DWORD getDword(const byte* buffer)
{
    // we want the DWORD which starts at byte-offset 8
    buffer += 8;
    // dereference as if it were pointing to a DWORD
    // (this would fail on some machines if the pointer
    // weren't pointing to a DWORD-aligned boundary)
    return *((const DWORD*)buffer);
}
Here's the corresponding access in Intel assembly; note that it's a single instruction, i.e. quite an efficient way to access the data, more efficient than reading and accumulating separate bytes:
mov eax,DWORD PTR [esi+8]
One reason to consider padding is if you plan to extend your protocol over time: some of the padding can be intentionally set aside for future assignment.
Another reason to consider padding is to save a couple of bits in length fields: if a length is always a multiple of 4 or 8, it can be counted in units of 4 or 8 bytes, shaving 2 or 3 bits off the length field.
One other good reason TCP has padding (which probably does not apply to you) is that it lets dedicated network-processing hardware easily separate the data from the header: since the data always starts on a 32-bit boundary, it's easier to split the header from the data when the packet gets routed.
If you have a 3-byte header and align it to 4 bytes, then designate the unused byte as 'reserved for future use' and require its bits to be zero (rejecting messages where they are not as malformed). That leaves you some extensibility. Or you might decide to use the byte as a version number: initially zero, then incremented if (when) you make incompatible changes to the protocol. Don't let the value be 'undefined' and "don't care"; you'll never be able to use it if you start out that way.
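A sketch of that version-byte idea, treating the 3-byte field from the example above as a payload length; the field meanings are my assumption, not part of the answer:

const PROTOCOL_VERSION = 0;

function writeHeader(payloadLength: number): Buffer {
  const header = Buffer.alloc(4);           // 3 header bytes + 1 version byte
  header.writeUIntBE(payloadLength, 0, 3);  // hypothetical 3-byte length field
  header.writeUInt8(PROTOCOL_VERSION, 3);   // the would-be padding byte, now a version
  return header;
}

function readHeader(header: Buffer): number {
  const version = header.readUInt8(3);
  if (version !== PROTOCOL_VERSION) {
    throw new Error(`malformed or incompatible message (version ${version})`);
  }
  return header.readUIntBE(0, 3);           // payload length
}

Rejecting unknown versions up front is what keeps the byte usable later: every deployed peer already treats an unexpected value as an error, so bumping the version can never be silently misread.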
