I am working with an I2C device under Linux and tried to use the device interface as described at the following link.
So if we assume the following code:
char outbuf[SIZE] = { 'e', 'b' };
struct i2c_rdwr_ioctl_data msgset;
struct i2c_msg msg[1];

msg[0].addr  = 0x53;   // access address 0x53
msg[0].flags = 0;      // 0 means write
msg[0].len   = SIZE;   // SIZE is already set to two
msg[0].buf   = outbuf;

msgset.msgs  = msg;
msgset.nmsgs = 1;

ioctl( file, I2C_RDWR, &msgset ); // file is already assigned, etc.
we would write one message containing two bytes to address 0x53!?
Or we could say,
S Addr Wr [A] Data [A] Data [A] P
in the way it's done here.
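For completeness, a full minimal version of that test would look roughly like this (the bus device path and the slave address are just placeholders for my setup):

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c.h>
#include <linux/i2c-dev.h>

#define SIZE 2

int main(void)
{
    char outbuf[SIZE] = { 'e', 'b' };
    struct i2c_rdwr_ioctl_data msgset;
    struct i2c_msg msg[1];

    int file = open("/dev/i2c-1", O_RDWR);  // bus number is a placeholder
    if (file < 0) {
        perror("open");
        return 1;
    }

    msg[0].addr  = 0x53;
    msg[0].flags = 0;                        // write
    msg[0].len   = SIZE;
    msg[0].buf   = (__u8 *)outbuf;

    msgset.msgs  = msg;
    msgset.nmsgs = 1;

    if (ioctl(file, I2C_RDWR, &msgset) < 0)
        perror("ioctl");

    close(file);
    return 0;
}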
But when I look at my scope, I get something like this:
or, a little more detailed:
But this is not what we want and not what the specification says;
instead we get
S Addr Wr [A] Data P S Addr Wr [A] Data P
Does anyone recognize this behavior or could explain it to me?
I tried all types of calls: IOCTL, SMBus, write_block_data.
Every time there is a new start condition between data bytes, and the address is also repeated!
Am I getting something wrong?
Thanks for your time and best regards!
Befedo
I found the misalignment...
My hardware was set up like this:
Notebook -> DP/VGA -> I2C-Slave
I used a notebook which only has a DisplayPort output, converted this via a DP-to-VGA adapter, and used the I2C interface where a simple slave was attached.
And it looks like the DP-to-VGA adapter can only serve byte-wide access to the I2C bus, so I set up a 'new' laptop which has a VGA interface integrated and used it directly...
Which led to a perfectly aligned transfer, as expected.
I have attached an eBPF XDP program (port_filter_kern.c) to my network interface.
port_filter_kern.c - It drops the incoming traffic that comes to specific ports (the port numbers are present in "port_map" as keys).
port_filter_user.c - It loads my eBPF program onto the given interface and updates the eBPF map "port_map" after reading a text file (which has the port numbers).
map_fd = bpf_object__find_map_fd_by_name(obj, "port_map");
printf("map_fd %d ",map_fd); //to see the map fd integer
int result = bpf_map_update_elem(map_fd, &portkey, &value, BPF_ANY);
Now, I want to access the same map "port_map" from another user space program (port_filter_runtime.c), which will get port numbers from the user/a text file at run time and needs to update the same map "port_map" so that incoming traffic to the newly given port numbers is also dropped.
I have tried the ways below to find the same map FD, but I didn't get the correct FD (verified against the FD printed by the first user space program, port_filter_user.c).
struct bpf_object *obj = bpf_object__open_file("port_filter_kern.o", NULL);
struct bpf_map *map = bpf_object__find_map_by_name(obj, "port_map");
int map_fd = bpf_map__fd(map);
printf("map_fd %d ",map_fd); //to see the map fd integer
and also tried the code below:
struct bpf_object *obj = bpf_object__open_file("port_filter_kern.o", NULL);
int map_fd = bpf_object__find_map_fd_by_name(obj, "port_map");
printf("map_fd %d ",map_fd); //to see the map fd integer
If I get the same map FD, I can use it to update my map.
Any guidance? Thanks in advance...
A file descriptor is an integer value that only makes sense in the context of its process. You cannot just share the value with another process and expect it to point to the same resource.
Typically, you would share a reference between processes by pinning the map (in the user space program that created it) to the bpffs (/sys/fs/bpf/), then retrieving the file descriptor in the other program from the pinned path with a bpf() syscall: see for example int bpf_obj_pin(int fd, const char *pathname) and int bpf_obj_get(const char *pathname) from libbpf.
Once you have the file descriptor in your second process, you can assign it to the map in the struct bpf_object with libbpf's bpf_map__reuse_fd().
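As a minimal sketch (the pin path /sys/fs/bpf/port_map and the __u32 key/value types are assumptions; adjust them to the actual map definition in port_filter_kern.c):

/* In port_filter_user.c, after finding the map FD, pin it once:
 *     bpf_obj_pin(map_fd, "/sys/fs/bpf/port_map");
 */

/* port_filter_runtime.c: reopen the pinned map and update it. */
#include <stdio.h>
#include <bpf/bpf.h>
#include <linux/types.h>

int main(void)
{
    int map_fd = bpf_obj_get("/sys/fs/bpf/port_map");
    if (map_fd < 0) {
        perror("bpf_obj_get");
        return 1;
    }

    __u32 portkey = 8080;   /* port to drop, e.g. read from the user */
    __u32 value = 1;
    if (bpf_map_update_elem(map_fd, &portkey, &value, BPF_ANY) < 0) {
        perror("bpf_map_update_elem");
        return 1;
    }
    return 0;
}

Alternatively, the runtime program can keep using bpf_object__open_file() and pass the FD obtained from bpf_obj_get() to bpf_map__reuse_fd() before loading, as described above.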
I am trying to learn how to write a basic SPI driver and below is the probe function that I wrote.
What I am trying to do here is set up the SPI device for the FRAM (datasheet) and use the spi_sync_transfer() API (description) to get the manufacturer's ID from the chip.
When I execute this code, I can see the data on the SPI bus using a logic analyzer, but I am unable to read it using the rx buffer. Am I missing something here? Could someone please help me with this?
static int fram_probe(struct spi_device *spi)
{
    int err;
    unsigned char ch16[] = {0x9F,0x00,0x00,0x00}; // 0x9F => 10011111
    unsigned char rx16[] = {0x00,0x00,0x00,0x00};

    printk("[FRAM DRIVER] fram_probe called \n");

    spi->max_speed_hz = 1000000;
    spi->bits_per_word = 8;
    spi->mode = (3);

    err = spi_setup(spi);
    if (err < 0) {
        printk("[FRAM DRIVER::fram_probe spi_setup failed!\n");
        return err;
    }
    printk("[FRAM DRIVER] spi_setup ok, cs: %d\n", spi->chip_select);

    spi_element[0].tx_buf = ch16;
    spi_element[1].rx_buf = rx16;

    err = spi_sync_transfer(spi, spi_element, ARRAY_SIZE(spi_element)/2);
    printk("rx16=%x %x %x %x\n", rx16[0], rx16[1], rx16[2], rx16[3]);
    if (err < 0) {
        printk("[FRAM DRIVER]::fram_probe spi_sync_transfer failed!\n");
        return err;
    }

    return 0;
}
spi_element is not declared in this example. You should show that, and also how all the elements of that array are filled. But just from the code that's there, I see a couple of mistakes.
You need to set the len parameter of spi_transfer. You've assigned the TX or RX buffer to ch16 or rx16 but not set the length of the buffer in either case.
You should zero out all the fields not used in the spi_transfer.
If you set the length to four, you would not be sending the proper command according to the datasheet. RDID expects a one byte command after which will follow four bytes of output data. You are writing a four byte command in your first transfer and then reading four bytes of data. The tx_buf in the first transfer should just be one byte.
And finally, the number of transfers specified as the last argument to spi_sync_transfer() is incorrect. It should be 2 in this case, because you have defined two transfers, spi_element[0] and spi_element[1]. You could use ARRAY_SIZE(spi_element) if the array holds exactly the transfers for this message and you want to send all of them.
Consider the following as a way to better fill in the spi_transfers. It takes care of zeroing out fields that are not used, defines the transfers in an easy-to-read way, and changes to the buffer sizes or to the number of transfers are automatically accounted for in the remaining code.
const char ch16[] = { 0x9f };   /* RDID command */
char rx16[4];

struct spi_transfer rdid[] = {
    { .tx_buf = ch16, .len = sizeof(ch16) },
    { .rx_buf = rx16, .len = sizeof(rx16) },
};

err = spi_sync_transfer(spi, rdid, ARRAY_SIZE(rdid));
Since you have a scope, be sure to check that this operation happens under a single chip select pulse. I have found more than one Linux SPI driver to have a bug that pulses chip select when it should not. In some cases switching from TX to RX (like done above) will trigger a CS pulse. In other cases a CS pulse is generated for every word (8 bits here) of data.
Another thing you should change is to use dev_info(&spi->dev, "device version %d", id) and also dev_err() to print messages. This inserts the device name in a standard way, instead of your hard-coded, non-standard, and inconsistent "[FRAM DRIVER]::" text, and it sets the level of the message appropriately.
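For example, the probe messages above could become something like:

dev_info(&spi->dev, "fram_probe called\n");

err = spi_setup(spi);
if (err < 0) {
    dev_err(&spi->dev, "spi_setup failed: %d\n", err);
    return err;
}
dev_info(&spi->dev, "spi_setup ok, cs: %d\n", spi->chip_select);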
Also, consider supporting device tree in your driver to read device properties. Then you can do things like change the SPI bus frequency for this device without rebuilding the kernel driver.
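A sketch of what that could look like (the "myvendor,fram" compatible string is made up; once the match table is in place, the SPI core fills in things like the bus speed from the spi-max-frequency property of your device tree node):

#include <linux/module.h>
#include <linux/mod_devicetable.h>
#include <linux/spi/spi.h>

static const struct of_device_id fram_of_match[] = {
    { .compatible = "myvendor,fram" },   /* made-up compatible string */
    { }
};
MODULE_DEVICE_TABLE(of, fram_of_match);

static struct spi_driver fram_driver = {
    .driver = {
        .name           = "fram",
        .of_match_table = fram_of_match,
    },
    .probe = fram_probe,
};
module_spi_driver(fram_driver);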
What is the "right" way to stuff an arbitrary, odd sized struct into a swift 3 Data object ?
I think that I have got there, but it seems horribly convoluted for what from prior experience was no than
dataObject.append(&structInstance, sizeof(structInstance))
My case is as follows:
The structure of interest:
public struct CutEntry {
var itemA : UInt64
var itemB : UInt32
}
I have an array of these things that I want to stuff into a data object, in a specific manner as the data object becomes a file which is eventually read by a different application on a different architecture.
The function to put them into a Data object
open func encodeCutsData() -> Data
{
    var data = Data()
    for entry in cutsArray
    {
        // bigendian stuff, as a var, just so you can get the address
        var entryCopy = CutEntry(itemA: entry.itemA.bigEndian, itemB: entry.itemB.bigEndian)
        // step 1: get the address of the item as an UnsafePointer
        let d2 = withUnsafePointer(to: &entryCopy) { return $0 }
        // step 2: cast it to a raw pointer
        let d3 = UnsafeRawPointer(d2)
        // step 3: create a temp data object
        let d4 = Data(bytes: d3, count: MemoryLayout<CutEntry>.size)
        // step 4: add the temp to the main data object
        data.append(d4)
    }
    return data
}
Earlier, when we only had NSMutableData, it was
let item = NSMutableData()
for entry in cutsArray
{
    var entryCopy = CutEntry(itemA: entry.itemA.bigEndian, itemB: entry.itemB.bigEndian)
    item.append(&entryCopy, length: MemoryLayout<CutEntry>.size)
}
I've spent a few hours searching for examples of manipulating struct and Data objects. I thought that I was close when I found references to UnsafeBufferPointer. That blew up in my face when I discovered that the "buffer" bit uses core memory alignment (which can be useful) and it was stuffing 16 bytes into the data object instead of the expected 12.
I am quite prepared to say that I have missed the blindingly obvious bit of RTFM somewhere. Can anyone offer a cleaner solution? Or has Swift really gone backwards here?
If I could find a way of getting a pointer to the item as a UInt8 pointer, that would remove a couple of lines, but that looks just as difficult.
Checking the reference for Data, I can find two things which may be useful for you:
init(bytes: UnsafeRawPointer, count: Int)
func append(Data)
You can write something like this:
var data = Data()
for entry in cutsArray {
    var entryCopy = CutEntry(itemA: entry.itemA.bigEndian, itemB: entry.itemB.bigEndian)
    data.append(Data(bytes: &entryCopy, count: MemoryLayout<CutEntry>.size))
}
I'm trying to manage value blocks with a MIFARE Classic card and a PN532 reader.
I'm using an open source library named "libnfc", but I do not see anything related to value blocks in this library.
Does anyone know how I could make increment, decrement and transfer calls with this reader & library?
Have a look at the header utils/mifare.h (and its associated implementation utils/mifare.c). They contain an implementation of the MIFARE reader commands. For instance, for the increment command, you would use something like:
mp.mpv.abtValue[0] = 1;
mp.mpv.abtValue[1] = 0;
mp.mpv.abtValue[2] = 0;
mp.mpv.abtValue[3] = 0;
nfc_initiator_mifare_cmd(pnd, MC_INCREMENT, blockNumber, &mp);
Where pnd is an nfc_device *, mp is a mifare_param, and you previously authenticated to that sector (see utils/nfc-mfclassic.c).
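A slightly fuller sketch, reusing mifare.h from libnfc's utils/ directory (the block number, the default FFFFFFFFFFFF key A, and the helper name are illustrative; the 4-byte operand is little-endian, and the increment only takes effect once it is followed by a transfer back into a block):

#include <stdbool.h>
#include <string.h>
#include <nfc/nfc.h>
#include "mifare.h"   /* from libnfc's utils/ directory */

bool increment_value_block(nfc_device *pnd, nfc_target *nt, uint8_t block)
{
    mifare_param mp;

    /* Authenticate the sector containing the value block (key A, default key). */
    memset(&mp, 0, sizeof(mp));
    memcpy(mp.mpa.abtKey, "\xff\xff\xff\xff\xff\xff", 6);
    memcpy(mp.mpa.abtAuthUid, nt->nti.nai.abtUid + nt->nti.nai.szUidLen - 4, 4);
    if (!nfc_initiator_mifare_cmd(pnd, MC_AUTH_A, block, &mp))
        return false;

    /* Increment the value block by 1 (4-byte little-endian operand). */
    memset(&mp, 0, sizeof(mp));
    mp.mpv.abtValue[0] = 1;
    if (!nfc_initiator_mifare_cmd(pnd, MC_INCREMENT, block, &mp))
        return false;

    /* The result sits in the card's internal register until it is
       transferred back into a block. */
    if (!nfc_initiator_mifare_cmd(pnd, MC_TRANSFER, block, &mp))
        return false;

    return true;
}

MC_DECREMENT works the same way, and MC_TRANSFER can also target a different block of the same sector.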
I would like to program in Threading Building Blocks with tasks, but how does one do the debugging in practice?
In general the print method is a solid technique for debugging programs.
In my experience with MPI parallelization, the right way to do logging is for each thread to print its debugging information to its own file (say "debug_irank", with irank the rank in MPI_COMM_WORLD) so that the logical errors can be found.
How can something similar be achieved with TBB? It is not clear how to access the thread number in the thread pool, as this is obviously something internal to TBB.
Alternatively, one could add an additional index specifying the rank when a task is generated, but this makes the code rather complicated, since the whole program has to take care of that.
First, get the program working with 1 thread. To do this, construct a task_scheduler_init as the first thing in main, like this:
#include "tbb/tbb.h"
int main() {
tbb::task_scheduler_init init(1);
...
}
Be sure to compile with the macro TBB_USE_DEBUG set to 1 so that TBB's checking will be enabled.
If the single-threaded version works, but the multi-threaded version does not, consider using Intel Inspector to spot race conditions. Be sure to compile with TBB_USE_THREADING_TOOLS so that Inspector gets enough information.
Otherwise, I usually first start by adding assertions, because the machine can check assertions much faster than I can read log messages. If I am really puzzled about why an assertion is failing, I use printfs and task ids (not thread ids). Easiest way to create a task id is to allocate one by post-incrementing a tbb::atomic<size_t> and storing the result in the task.
If I'm having a really bad day and the printfs are changing program behavior so that the error does not show up, I use "delayed printfs". Stuff the printf arguments in a circular buffer, and run printf on the records later after the failure is detected. Typically for the buffer, I use an array of structs containing the format string and a few word-size values, and make the array size a power of two. Then an atomic increment and mask suffices to allocate slots. E.g., something like this:
const size_t bufSize = 1024;

struct record {
    const char* format;
    void *arg0, *arg1;
};

tbb::atomic<size_t> head;
record buf[bufSize];

void recf(const char* fmt, void* a, void* b) {
    record* r = &buf[head++ & bufSize-1];
    r->format = fmt;
    r->arg0 = a;
    r->arg1 = b;
}

void recf(const char* fmt, int a, int b) {
    record* r = &buf[head++ & bufSize-1];
    r->format = fmt;
    r->arg0 = (void*)a;
    r->arg1 = (void*)b;
}
The two recf routines record the format and the values. The casting is somewhat abusive, but on most architectures you can print the record correctly in practice with printf(r->format, r->arg0, r->arg1), even if the 2nd overload of recf created the record.
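A possible way to dump the buffer once the failure is detected could then look like this (just a sketch, reusing buf, head, and record from above; it assumes recording has stopped and relies on buf being zero-initialized so unused slots have a null format):

void dumpRecords() {
    size_t end = head;                                 // snapshot of the counter
    size_t begin = end > bufSize ? end - bufSize : 0;  // oldest record still in the buffer
    for (size_t i = begin; i < end; ++i) {
        record* r = &buf[i & bufSize-1];
        if (r->format)
            printf(r->format, r->arg0, r->arg1);
    }
}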