Python3 open buffering argument looks strange

From the doc
buffering is an optional integer used to set the buffering policy.
Pass 0 to switch buffering off (only allowed in binary mode), 1 to
select line buffering (only usable in text mode), and an integer > 1
to indicate the size in bytes of a fixed-size chunk buffer. When no
buffering argument is given, the default buffering policy works as
follows:
Binary files are buffered in fixed-size chunks; the size of the buffer is chosen using a heuristic trying to determine the underlying
device’s “block size” and falling back on io.DEFAULT_BUFFER_SIZE. On
many systems, the buffer will typically be 4096 or 8192 bytes long.
“Interactive” text files (files for which isatty() returns True) use line buffering. Other text files use the policy described above
for binary files.
I open a file named test.log in text mode and set buffering to 16. So I think the chunk size is 16, and when I write a 32-byte string to the file, write (the syscall) should be called twice. But actually, it is called only once (tested with Python 3.7.2, GCC 8.2.1 20181127, on Linux).
import os

try:
    os.unlink('test.log')
except Exception:
    pass

with open('test.log', 'a', buffering=16) as f:
    for _ in range(10):
        f.write('a' * 32)
Using strace -e write python3 test.py to trace syscalls, I get the following:
write(3, "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"..., 320) = 320
What does buffering mean here, then?

This answer is valid for CPython 3.7; other implementations of Python can differ.
The open() function in text mode returns an _io.TextIOWrapper object. _io.TextIOWrapper has an internal buffer called pending_bytes with a hard-coded size of 8192 bytes, and it holds a handle on an _io.BufferedWriter (text write/append modes such as w and a) or an _io.BufferedRandom (text update modes such as r+ and w+). The size of the _io.BufferedWriter/_io.BufferedRandom buffer is specified by the buffering argument of the open() function.
When you call _io.TextIOWrapper.write("some text"), it adds the text to the internal pending_bytes buffer. After some writes the pending_bytes buffer fills up, and its contents are then written into the buffer inside _io.BufferedWriter. When the buffer inside _io.BufferedWriter fills up too, it is written to the target file.
When you open a file in binary mode, you directly get the _io.BufferedWriter/_io.BufferedRandom object, initialized with the buffer size from the buffering parameter.
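You can check this layering from Python itself; a minimal sketch (the exact type names are CPython implementation details):
import io

with open('test.log', 'w', buffering=16) as f:
    print(type(f))          # <class '_io.TextIOWrapper'>
    print(type(f.buffer))   # <class '_io.BufferedWriter'>
    print(isinstance(f.buffer, io.BufferedWriter))  # True

with open('test.log', 'ab', buffering=16) as f:
    print(type(f))          # <class '_io.BufferedWriter'>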
Let's look at some examples. I will start with simpler one using binary mode.
# Case 1
with open('test.log', 'wb', buffering=16) as f:
    for _ in range(5):
        f.write(b'a' * 15)
strace output:
write(3, "aaaaaaaaaaaaaaa", 15) = 15
write(3, "aaaaaaaaaaaaaaa", 15) = 15
write(3, "aaaaaaaaaaaaaaa", 15) = 15
write(3, "aaaaaaaaaaaaaaa", 15) = 15
write(3, "aaaaaaaaaaaaaaa", 15) = 15
In the first iteration it fills the buffer with 15 bytes. In the second iteration it discovers that adding another 15 bytes would overflow the buffer, so it first flushes it (calls the system write) and then stores the new 15 bytes. The same happens in the following iterations. After the last iteration, 15 B remain in the buffer; they are written when the file is closed (on leaving the with block).
In the second case, I try to write more data than the buffer's size:
# Case 2
with open('test.log', 'wb', buffering=16) as f:
    for _ in range(5):
        f.write(b'a' * 17)
strace output:
write(3, "aaaaaaaaaaaaaaaaa", 17) = 17
write(3, "aaaaaaaaaaaaaaaaa", 17) = 17
write(3, "aaaaaaaaaaaaaaaaa", 17) = 17
write(3, "aaaaaaaaaaaaaaaaa", 17) = 17
write(3, "aaaaaaaaaaaaaaaaa", 17) = 17
What happens here is that in the first iteration it tries to write 17 B into the buffer, but they cannot fit, so they are written directly to the file and the buffer stays empty. The same applies in every iteration.
Now let's look at the text mode.
# Case 3
with open('test.log', 'w', buffering=16) as f:
    for _ in range(5):
        f.write('a' * 8192)
strace output:
write(3, "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"..., 16384) = 16384
write(3, "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"..., 16384) = 16384
write(3, "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"..., 8192) = 8192
First, recall that pending_bytes has a size of 8192 B. In the first iteration it writes 8192 bytes (from the code: 'a' * 8192) into the pending_bytes buffer. In the second iteration it adds another 8192 bytes to pending_bytes, discovers that this is more than 8192 (the size of the pending_bytes buffer), and writes it into the underlying _io.BufferedWriter. The buffer in _io.BufferedWriter is only 16 B (the buffering parameter), so the data is written to the file immediately (same as case 2). Now pending_bytes is empty, and in the third iteration it is again filled with 8192 B. In the fourth iteration another 8192 B are added, the pending_bytes buffer overflows, and the data is again written directly to the file, as in the second iteration. In the last iteration 8192 B are added to the pending_bytes buffer, which is flushed when the file is closed.
The last example uses a buffering value bigger than 8192 B. For a better explanation I added two more iterations.
# Case 4
with open('test.log', 'w', buffering=30000) as f:
    for _ in range(7):
        f.write('a' * 8192)
strace output:
write(3, "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"..., 16384) = 16384
write(3, "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"..., 16384) = 16384
write(3, "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"..., 24576) = 24576
Iterations:
1. Add 8192 B into pending_bytes.
2. Add 8192 B into pending_bytes, but that is more than its maximal size, so it is written into the underlying _io.BufferedWriter and stays there (pending_bytes is empty now).
3. Add 8192 B into pending_bytes.
4. Add 8192 B into pending_bytes, but that is more than its maximal size, so it tries to write into the underlying _io.BufferedWriter. But that would exceed the maximal capacity of the underlying buffer, because 16384 + 16384 > 30000 (the first 16384 B are still there from iteration 2), so it first writes the old 16384 B to the file and then puts the new 16384 B (from pending_bytes) into the buffer. (Now the pending_bytes buffer is empty again.)
5. Same as 3.
6. Same as 4.
7. Now pending_bytes is empty and _io.BufferedWriter contains 16384 B. In this iteration it fills pending_bytes with 8192 B. And that's it.
When the program leaves the with block, it closes the file. The closing process is:
1. Write the 8192 B from pending_bytes into _io.BufferedWriter (possible because 8192 + 16384 < 30000).
2. Write (8192 + 16384 =) 24576 B to the file.
3. Close the file descriptor.
By the way, I currently have no idea why that pending_bytes buffer exists at all, when the underlying buffer from _io.BufferedWriter could be used for the buffering instead. My best guess is that it improves performance for files working in text mode.
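If you want to watch both layers drain without closing the file, an explicit flush() goes through both of them. A quick sketch (run it under strace -e write to observe the single syscall):
with open('test.log', 'w', buffering=30000) as f:
    f.write('a' * 8192)  # sits in pending_bytes, no syscall yet
    f.flush()            # drains pending_bytes -> _io.BufferedWriter -> write(2)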

Related

determining the optimal buffer size for file read in linux

I am writing a C program which reads from stdin and writes to stdout, but it buffers the data so that a write is performed only after it has read a specific number of bytes (= SIZE):
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define SIZE 100

int main()
{
    char buf[SIZE];
    int n = 0;
    //printf("Block size = %d\n", BUFSIZ);
    while ((n = read(0, buf, sizeof(buf))) > 0)
        write(1, buf, n);
    exit(0);
}
I am running this program on Ubuntu 18.04 hosted on Oracle VirtualBox (4 GB RAM, 2 cores), and testing the program for different values of the buffer size. I have redirected standard input to come from a file (which contains random numbers created dynamically) and standard output to go to /dev/null. Here is the shell script used to run the test:
#!/bin/bash
# $1 - step size (bytes)
# $2 - start size (bytes)
# $3 - stop size (bytes)
echo "Changing buffer size from $2 to $3 in steps of $1, and measuring time for copying."
buff_size=$2
echo "Test Data" >testData
echo "Step Size:(doubles from previous size) Start Size:$2 Stop Size:$3" >>testData
while [ $buff_size -le $3 ]
do
    echo "" >>testData
    echo -n "$buff_size," >>testData
    gcc -DSIZE=$buff_size copy.c                      # Recompile the copy program with the new buffer size
    dd bs=1000 count=1000000 </dev/urandom >testFile  # Create testFile with 1 GB of random data
    (/usr/bin/time -f "\t%U, \t%S," ./a.out <testFile 1>/dev/null) 2>>testData
    buff_size=$(($buff_size * 2))
    rm -f a.out
    rm -f testFile
done
I am measuring the time taken to execute the program and tabulating it. A test run produces the following data:
Test Data
Step Size:(doubles from previous size) Start Size:1 Stop Size:524288
1, 5.94, 17.81,
2, 5.53, 18.37,
4, 5.35, 18.37,
8, 5.58, 18.78,
16, 5.45, 18.96,
32, 5.96, 19.81,
64, 5.60, 18.64,
128, 5.62, 17.94,
256, 5.37, 18.33,
512, 5.70, 18.45,
1024, 5.43, 17.45,
2048, 5.22, 17.95,
4096, 5.57, 18.14,
8192, 5.88, 17.39,
16384, 5.39, 18.64,
32768, 5.27, 17.78,
65536, 5.22, 17.77,
131072, 5.52, 17.70,
262144, 5.60, 17.40,
524288, 5.96, 17.99,
I don't see any significant variation in user+system time as the block size changes. But theoretically, as the block size becomes smaller, many more system calls are issued for the same file size, and it should take more time to execute. I have seen test results in the book 'Advanced Programming in the UNIX Environment' by Richard Stevens for a similar test, which show that user+system time drops significantly when the buffer size used in the copy approaches the block size (in my case, the block size is 4096 bytes on an ext4 partition).
Why am I not able to reproduce these results? Am I missing some factors in these tests?
You did not remove the line #define SIZE 100 from your source code, so the definition given via the command-line option (-DSIZE=...) only has influence above that #define; below it, SIZE is 100 again. On my compiler I get a warning about this at compile time (<command-line>:0:0: note: this is the location of the previous definition).
If you comment out the #define (or guard it with #ifndef SIZE ... #endif so the command-line value wins), that fixes this error.
Another aspect which comes to mind:
If you create a file on a machine and read it right away after that, it will be in the OS's disk cache (which is large enough to store all of this file), so the actual disk block size won't have much of an influence here.
Stevens's book was written in 1992 when RAM was way more expensive than today, so maybe some information in there is outdated. I also doubt that newer editions of the book have taken things like these out because in general they are still true.
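To see the cache effect directly, here is a small Python sketch (it assumes a file named testFile exists in the current directory; the first read is only "cold" if the caches were dropped right before):
import time

def timed_read(path, bufsize):
    start = time.perf_counter()
    with open(path, 'rb', buffering=0) as f:  # unbuffered: each read() hits the OS
        while f.read(bufsize):
            pass
    return time.perf_counter() - start

print(timed_read('testFile', 4096))  # cold run (if caches were just dropped)
print(timed_read('testFile', 4096))  # warm run: served from the page cache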

Golang : fatal error: runtime: out of memory

I am trying to use this package from GitHub for string matching. My dictionary is 4 MB. When creating the trie, I got fatal error: runtime: out of memory. I am using Ubuntu 14.04 with 8 GB of RAM and Golang version 1.4.2.
The error seems to come from line 99 (now) here: m.trie = make([]node, max)
The program stops at this line.
This is the error:
fatal error: runtime: out of memory
runtime stack:
runtime.SysMap(0xc209cd0000, 0x3b1bc0000, 0x570a00, 0x5783f8)
/usr/local/go/src/runtime/mem_linux.c:149 +0x98
runtime.MHeap_SysAlloc(0x57dae0, 0x3b1bc0000, 0x4296f2)
/usr/local/go/src/runtime/malloc.c:284 +0x124
runtime.MHeap_Alloc(0x57dae0, 0x1d8dda, 0x10100000000, 0x8)
/usr/local/go/src/runtime/mheap.c:240 +0x66
goroutine 1 [running]:
runtime.switchtoM()
/usr/local/go/src/runtime/asm_amd64.s:198 fp=0xc208518a60 sp=0xc208518a58
runtime.mallocgc(0x3b1bb25f0, 0x4d7fc0, 0x0, 0xc20803c0d0)
/usr/local/go/src/runtime/malloc.go:199 +0x9f3 fp=0xc208518b10 sp=0xc208518a60
runtime.newarray(0x4d7fc0, 0x3a164e, 0x1)
/usr/local/go/src/runtime/malloc.go:365 +0xc1 fp=0xc208518b48 sp=0xc208518b10
runtime.makeslice(0x4a52a0, 0x3a164e, 0x3a164e, 0x0, 0x0, 0x0)
/usr/local/go/src/runtime/slice.go:32 +0x15c fp=0xc208518b90 sp=0xc208518b48
github.com/mf/ahocorasick.(*Matcher).buildTrie(0xc2083c7e60, 0xc209860000, 0x26afb, 0x2f555)
/home/go/ahocorasick/ahocorasick.go:104 +0x28b fp=0xc208518d90 sp=0xc208518b90
github.com/mf/ahocorasick.NewStringMatcher(0xc208bd0000, 0x26afb, 0x2d600, 0x8)
/home/go/ahocorasick/ahocorasick.go:222 +0x34b fp=0xc208518ec0 sp=0xc208518d90
main.main()
/home/go/seme/substrings.go:66 +0x257 fp=0xc208518f98 sp=0xc208518ec0
runtime.main()
/usr/local/go/src/runtime/proc.go:63 +0xf3 fp=0xc208518fe0 sp=0xc208518f98
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:2232 +0x1 fp=0xc208518fe8 sp=0xc208518fe0
exit status 2
This is the content of the main function (taken from the same repo: test file)
var dictionary = InitDictionary()
var bytes = []byte("Partial invoice (€100,000, so roughly 40%) for the consignment C27655 we shipped on 15th August to London from the Make Believe Town depot. INV2345 is for the balance.. Customer contact (Sigourney) says they will pay this on the usual credit terms (30 days).")
var precomputed = ahocorasick.NewStringMatcher(dictionary)// line 66 here
fmt.Println(precomputed.Match(bytes))
Your structure is awfully inefficient in terms of memory; let's look at the internals. But before that, a quick reminder of the space required for some Go types (the figures below are for a 32-bit platform; on 64-bit, int, uintptr, and pointers take 8 bytes and a slice header takes 24 bytes, roughly doubling the node size):
bool: 1 byte
int: 4 bytes
uintptr: 4 bytes
[N]type: N*sizeof(type)
[]type: 12 + len(slice)*sizeof(type)
Now, let's have a look at your structure:
type node struct {
    root    bool        // 1 byte
    b       []byte      // 12 + len(slice)*1
    output  bool        // 1 byte
    index   int         // 4 bytes
    counter int         // 4 bytes
    child   [256]*node  // 256*4 = 1024 bytes
    fails   [256]*node  // 256*4 = 1024 bytes
    suffix  *node       // 4 bytes
    fail    *node       // 4 bytes
}
OK, you can already guess what happens here: each node weighs more than 2 KB. This is huge! Finally, let's look at the code you use to initialize your trie:
func (m *Matcher) buildTrie(dictionary [][]byte) {
    max := 1
    for _, blice := range dictionary {
        max += len(blice)
    }
    m.trie = make([]node, max)
    // ...
}
You said your dictionary is 4 MB. If it is 4 MB in total, then at the end of the for loop max = 4M. If it holds 4M different words, then max = 4M * avg(word_length).
We'll take the first scenario, the nicer one. You are initializing a slice of 4M nodes, each of which uses 2 KB. Yes, that makes a nice 8 GB necessary.
You should review how you build your trie. From the Wikipedia page on the Aho-Corasick algorithm, each node contains a single character, so there are at most 256 children going out of the root, not 4M.
Some material to make it right: https://web.archive.org/web/20160315124629/http://www.cs.uku.fi/~kilpelai/BSA05/lectures/slides04.pdf
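For intuition only, here is a tiny sketch (in Python, since it makes the point compactly) of a character trie that allocates nodes lazily, one per inserted character, instead of preallocating from the total dictionary size:
class Node:
    __slots__ = ('children', 'output')

    def __init__(self):
        self.children = {}   # char -> Node, allocated only when needed
        self.output = False  # True if a dictionary word ends here

def build_trie(words):
    root = Node()
    for word in words:
        n = root
        for ch in word:
            if ch not in n.children:
                n.children[ch] = Node()
            n = n.children[ch]
        n.output = True
    return root

# The node count is bounded by the total number of characters inserted,
# and each node only pays for the children it actually has.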
The node type has a memory size of 2084 bytes.
I wrote a little program to demonstrate the memory usage: https://play.golang.org/p/szm7AirsDB
As you can see, the three strings (11(+1) bytes in size) of dictionary := []string{"fizz", "buzz", "123"} require 24 MB of memory.
If your dictionary has a length of 4 MB, you would need about 4,000,000 * 2084 bytes ≈ 8.3 GB of memory.
So you should try to decrease the size of your dictionary.
Setting the resource limit to unlimited worked for me:
if ulimit -a reports 0, run ulimit -c unlimited.
Maybe set a real size limit to be more secure.

psutil vs dd: monitoring disk I/O

I'm writing y.a.t. (yet-another-tool :)) for monitoring disk usage on Linux.
I'm using Python 3.3.2 and psutil 3.3.0.
The process I'm monitoring does something really basic: I use the dd tool and vary the block size (128, 512, 1024, 4096):
#!/bin/bash
dd if=./bigfile.txt of=./copy.img bs=4096
bigfile.txt:
$ stat bigfile.txt
File: ‘bigfile.txt’
Size: 87851423 Blocks: 171600 IO Block: 4096 regular file
And the snippet of the monitor is as follows:
def poll(interval, proc):
    d_before = proc.io_counters()
    time.sleep(interval)
    tst = time.time()
    d_after = proc.io_counters()
    usage = OrderedDict.fromkeys(d_after.__dict__.keys())
    for k, v in usage.items():
        usage[k] = d_after.__dict__[k] - d_before.__dict__[k]
    return tst, usage
At each run, I clear the cache (as suggested many times on stackoverflow):
rm copy.img && sudo sh -c "echo 3 > /proc/sys/vm/drop_caches"
My question is: why aren't the numbers matching?
bs=128:
dd:
686339+1 records in
686339+1 records out
87851423 bytes (88 MB) copied, 1.21664 s, 72.2 MB/s
monitor.py:
1450778750.104943 OrderedDict([('read_count', 686352), ('write_count', 686343), ('read_bytes', 87920640), ('write_bytes', 87855104)])
bs=4096
dd:
21448+1 records in
21448+1 records out
87851423 bytes (88 MB) copied, 0.223911 s, 392 MB/s
monitor.py:
1450779294.5541275 OrderedDict([('read_count', 21468), ('write_count', 21452), ('read_bytes', 88252416), ('write_bytes', 87855104)])
The difference is still there with all the values of bs.
Is it a matter of certain reads/writes not being counted? Does psutil perform some extra work? For example, with bs=4096, why does psutil report 400993 more bytes for read and 3681 more for write?
Am I missing something big?
Thanks a lot.
EDIT: as an update, the granularity of the timing, i.e. the time.sleep(interval) call, doesn't matter. I tried different values and summed up the total number of reads and writes reported by psutil. The difference remains.
EDIT2: typo in snippet code
write_bytes
The read_bytes and write_bytes correspond to the same fields from /proc/<PID>/io. Quoting the documentation (emphasis mine):
read_bytes
----------
I/O counter: bytes read
Attempt to count the number of bytes which this process really did cause to
be fetched from the storage layer. Done at the submit_bio() level, so it is
accurate for block-backed filesystems.
write_bytes
-----------
I/O counter: bytes written
Attempt to count the number of bytes which this process caused to be sent to
the storage layer. This is done at page-dirtying time.
As you know, most (all?) filesystems are block-based. This implies that if you have a program that, say, writes just 5 bytes to a file, and your block size is 4 KiB, then 4 KiB will be written.
If you don't trust dd, let's try with a simple Python script:
with open('something', 'wb') as f:
    f.write(b'12345')
input('press Enter to exit')
This script should write only 5 bytes, but if we inspect /proc/<PID>/io, we can see that 4 KiB were written:
$ cat /proc/3455/io
rchar: 215317
wchar: 24
syscr: 66
syscw: 2
read_bytes: 0
write_bytes: 4096
cancelled_write_bytes: 0
This is the same thing that is happening with dd in your case.
You asked dd to write 87851423 bytes. How many 4 KiB blocks is 87851423 bytes?
87851423 - (87851423 mod 4096) + 4096 = 87855104
Not by chance, 87855104 is exactly the number reported by psutil.
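The same round-up-to-a-whole-block arithmetic is easy to reproduce, e.g. in Python:
size = 87851423
block = 4096
rounded = (size + block - 1) // block * block  # round up to the next block boundary
print(rounded)  # 87855104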
read_bytes
How about read_bytes? In theory we should have read_bytes equal to write_bytes, but actually read_bytes shows 16 more blocks in the first run, and 97 more blocks in the second run.
Well, first of all, let's see what files dd is actually reading:
$ strace -e trace=open,read -- dd if=/dev/zero of=zero bs=1M count=2
open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
open("/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0`\v\2\0\0\0\0\0"..., 832) = 832
open("/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = 3
open("/dev/zero", O_RDONLY) = 3
open("zero", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 3
read(0, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 1048576) = 1048576
read(0, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 1048576) = 1048576
open("/usr/share/locale/locale.alias", O_RDONLY|O_CLOEXEC) = 0
read(0, "# Locale name alias data base.\n#"..., 4096) = 2570
read(0, "", 4096) = 0
open("/usr/share/locale/en_US/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale-langpack/en_US/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale-langpack/en/LC_MESSAGES/coreutils.mo", O_RDONLY) = 0
+++ exited with 0 +++
As you can see, dd is opening and reading the dynamic linker's cache, the GNU C library, and locale files. It is reading more bytes than you can see above, because it's also using mmap, not just read.
The point is: dd reads many more files than just the source file, so it's expected that read_bytes is somewhat higher than write_bytes. But why is it inconsistent?
Those files read by dd are also used by many other programs. Even if you drop_caches just before executing dd, chances are that some other process reloads one of these files into memory. You can try with this very simple C program:
int main()
{
    while (1) {
    }
}
Compiled with the default GCC options, this program does nothing except loading the dynamic linker and the GNU C library. If you drop_caches, execute the program, and cat /proc/<PID>/io more than once, you'll see that read_bytes varies across runs (unless you perform the steps very fast, in which case the probability that some other program has loaded some of those files into the cache is low).

Can the logical erase block size of an MTD device be increased?

The minimum erase block size for jffs2 (mtd-utils version 1.5.0, mkfs.jffs2 revision 1.60) seems to be 8KiB:
Erase size 0x1000 too small. Increasing to 8KiB minimum
However I am running Linux 3.10 with an at25df321a,
m25p80 spi32766.0: at25df321a (4096 Kbytes),
and the erase block size is only 4KiB:
mtd5
Name: spi32766.0
Type: nor
Eraseblock size: 4096 bytes, 4.0 KiB
Amount of eraseblocks: 1024 (4194304 bytes, 4.0 MiB)
Minimum input/output unit size: 1 byte
Sub-page size: 1 byte
Character device major/minor: 90:10
Bad blocks are allowed: false
Device is writable: true
Is there a way to make the mtd system treat multiple erase blocks as one? Maybe some ioctl or module parameter?
If I flash a jffs2 image with larger erase block size, I get lots of kernel error messages, missing files and sometimes panic.
workaround
I found that flash_erase --jffs2 produces a working filesystem in spite of the 4 KiB erase block size. So I hacked the mkfs.jffs2.c file, and the resulting image seems to work fine. I'll give it some testing.
diff -rupN orig/mkfs.jffs2.c new/mkfs.jffs2.c
--- orig/mkfs.jffs2.c 2014-10-20 15:43:31.751696500 +0200
+++ new/mkfs.jffs2.c 2014-10-20 15:43:12.623431400 +0200
@@ -1659,11 +1659,11 @@ int main(int argc, char **argv)
 			}
 			erase_block_size *= units;
-			/* If it's less than 8KiB, they're not allowed */
-			if (erase_block_size < 0x2000) {
-				fprintf(stderr, "Erase size 0x%x too small. Increasing to 8KiB minimum\n",
+			/* If it's less than 4KiB, they're not allowed */
+			if (erase_block_size < 0x1000) {
+				fprintf(stderr, "Erase size 0x%x too small. Increasing to 4KiB minimum\n",
 				erase_block_size);
-			erase_block_size = 0x2000;
+			erase_block_size = 0x1000;
 			}
 			break;
 		}
http://lists.infradead.org/pipermail/linux-mtd/2010-September/031876.html
JFFS2 should be able to fit at least one node into an eraseblock. The maximum node size is 4KiB + a few bytes. This is why the minimum eraseblock size is 8KiB.
But in practice, even 8KiB is bad, because you end up wasting a lot of space at the end of eraseblocks.
You should join several eraseblocks into one virtual eraseblock of 64 or 128 KiB and use that - it will be more optimal.
Some drivers have already implemented this. I know about the MTD_SPI_NOR_USE_4K_SECTORS Linux configuration option. It has to be set to "n" to enable large erase sectors of size 0x00010000 (64 KiB); in the kernel .config this appears as # CONFIG_MTD_SPI_NOR_USE_4K_SECTORS is not set.

Confused about "three successive writes: bytes 10, bytes 32, bytes 54"?

I am learning the SMSC smc91cx driver code, and I learned how to write test code for the smc91c111 NIC from the instructions of Application Note 9-6. I cannot understand the following instructions under "Transmitting A Packet":
Write the destination address (three successive writes: bytes 10, bytes 32, bytes 54)
Write 0xFFFF, 0xFFFF, 0xFFFF
Write the source address (three successive writes: bytes 10, bytes 32, bytes 54)
Write 0x0000, 0x0000, 0x0000
I cannot make sense of these instructions. Should I write 10 bytes of 0xFF plus 32 bytes plus 54 bytes to the buffer, or just write 0xFF at byte positions 10, 32, and 54?
But if so, why would you then write 0x0000 to the same positions?
Rather than allocating several different registers to write to, that chip has you write to the same DATA register serially until you have set all the information. The DATA register is 2 bytes wide, but a MAC address is 6 bytes, numbered 0-5. So you have to write it 2 bytes at a time: bytes number 1 and 0 first, followed by bytes number 3 and 2, then bytes number 5 and 4. Then write 0xFFFF three times to the DATA register, then repeat for the source address and the 0x0000s.
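To make the byte pairing concrete, here is a tiny Python sketch (the MAC value is made up; it only shows how bytes 0-5 pair into the three 16-bit writes):
mac = bytes([0x00, 0x11, 0x22, 0x33, 0x44, 0x55])            # bytes 0..5 of the address
words = [mac[i] | (mac[i + 1] << 8) for i in range(0, 6, 2)]  # low byte first in each pair
print([hex(w) for w in words])  # ['0x1100', '0x3322', '0x5544'] -> "bytes 10, 32, 54"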
