Can the logical erase block size of an MTD device be increased? - linux

The minimum erase block size for jffs2 (mtd-utils version 1.5.0, mkfs.jffs2 revision 1.60) seems to be 8KiB:
Erase size 0x1000 too small. Increasing to 8KiB minimum
However I am running Linux 3.10 with an at25df321a,
m25p80 spi32766.0: at25df321a (4096 Kbytes),
and the erase block size is only 4KiB:
mtd5
Name: spi32766.0
Type: nor
Eraseblock size: 4096 bytes, 4.0 KiB
Amount of eraseblocks: 1024 (4194304 bytes, 4.0 MiB)
Minimum input/output unit size: 1 byte
Sub-page size: 1 byte
Character device major/minor: 90:10
Bad blocks are allowed: false
Device is writable: true
Is there a way to make the mtd system treat multiple erase blocks as one? Maybe some ioctl or module parameter?
If I flash a jffs2 image with a larger erase block size, I get lots of kernel error messages, missing files and sometimes a panic.
Workaround
I found that flash_erase --jffs2 produces a working filesystem despite the 4KiB erase block size. So I hacked the mkfs.jffs2.c file and the resulting image seems to work fine. I'll give it some testing.
diff -rupN orig/mkfs.jffs2.c new/mkfs.jffs2.c
--- orig/mkfs.jffs2.c 2014-10-20 15:43:31.751696500 +0200
+++ new/mkfs.jffs2.c 2014-10-20 15:43:12.623431400 +0200
@@ -1659,11 +1659,11 @@ int main(int argc, char **argv)
}
erase_block_size *= units;
- /* If it's less than 8KiB, they're not allowed */
- if (erase_block_size < 0x2000) {
- fprintf(stderr, "Erase size 0x%x too small. Increasing to 8KiB minimum\n",
+ /* If it's less than 4KiB, they're not allowed */
+ if (erase_block_size < 0x1000) {
+ fprintf(stderr, "Erase size 0x%x too small. Increasing to 4KiB minimum\n",
erase_block_size);
- erase_block_size = 0x2000;
+ erase_block_size = 0x1000;
}
break;
}

http://lists.infradead.org/pipermail/linux-mtd/2010-September/031876.html
JFFS2 should be able to fit at least one node into an eraseblock. The
maximum node size is 4KiB plus a few bytes. This is why the minimum
eraseblock size is 8KiB.
But in practice, even 8KiB is bad because you end up wasting a
lot of space at the end of eraseblocks.
You should join several eraseblocks into one virtual eraseblock of 64 or
128 KiB and use that - this will be more optimal.
Some drivers already implement this. One I know of is the
MTD_SPI_NOR_USE_4K_SECTORS
Linux configuration option. It has to be set to "n" to enable large erase sectors of size 0x00010000 (64 KiB).
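For reference, disabling that option in a kernel configuration looks like the fragment below. This is only a sketch: the option lives in the SPI-NOR layer of newer kernels, so it may not exist under that name in the 3.10 kernel from the question.
# CONFIG_MTD_SPI_NOR_USE_4K_SECTORS is not set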

Related

Is there any limit in setting zram device disksize on Linux?

I'm trying to create a zram device on my target device. My target cannot allocate memory if the zram disksize is above 100 GB, but it's okay with a disksize of 50 GB or less.
Is there any limit on setting the zram device disksize on Linux? My target device only has 2 GB of RAM.
I guess you can give a number up to UINT64_MAX - 4095 = 18446744073709547520 on a 64-bit platform.
https://github.com/torvalds/linux/blob/master/drivers/block/zram/zram_drv.h#L101
https://github.com/torvalds/linux/blob/master/drivers/block/zram/zram_drv.c#L1506
https://github.com/torvalds/linux/blob/master/drivers/block/zram/zram_drv.c#L901
So what we have:
... disksize_store(...) {
u64 disksize;
...
// memparse() returns unsigned long long, so we can pass in up to UINT64_MAX here.
disksize = memparse(...);
// PAGE_ALIGN, PAGE_SIZE = 1<<12
disksize = PAGE_ALIGN(disksize)
= (((disksize)+((PAGE_SIZE)-1))&(~((typeof(disksize))(PAGE_SIZE)-1)))
= (disksize + ((1<<12)-1))&(~((1<<12)-1))
= (disksize + 4095) & 0xfffffffffffff000
// ^^^^^^^^^^^^^^^ this can overflow
// so max number is UINT64_MAX - 4095 so it doesn't overflow
// otherwise this macro will return 0
...
if (!zram_meta_alloc(..., disksize)) {
...
return ...;
}
...
zram->disksize = disksize;
...
}
So let's look into zram_meta_alloc:
... zram_meta_alloc(..., disksize) {
...
num_pages = disksize >> PAGE_SHIFT;
// max num_pages = 0xfffffffffffff = UINT64_MAX >> PAGE_SHIFT
... = vzalloc(num_pages * sizeof(*zram->table));
// ^^^^^^^^^^^^^^^ this can overflow
...
}
vzalloc takes an unsigned long as its argument, and ULONG_MAX equals UINT64_MAX on a 64-bit platform. sizeof(*zram->table) is sizeof(unsigned long) + sizeof(unsigned long) [+ optionally sizeof(ktime_t)] + padding (see here). Without padding, and assuming a 64-bit platform where sizeof(unsigned long) = 8, that is 8+8[+8] = 16 or 24 bytes. But anyway, the maximum num_pages equals UINT64_MAX >> 12, so to overflow the 64-bit multiplication we would need sizeof(*zram->table) = 2^PAGE_SHIFT = 4096, and that shouldn't happen (unless the compiler decides to add over 4000 bytes of padding to the zram->table struct). So we are left with UINT64_MAX - 4095.
So the maximum disksize is UINT64_MAX - 4095. If you pass a disksize equal to UINT64_MAX - x, where 0 <= x < 4095, then because of the PAGE_ALIGN macro the disksize will effectively be set to 0. Probably this should be brought up with a kernel developer so that the PAGE_ALIGN macro can be modified to support such numbers.
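To see that wrap-around concretely, here is a minimal user-space sketch of the same arithmetic. It assumes PAGE_SIZE is 4096 and re-creates the PAGE_ALIGN expansion locally; nothing here is taken from the zram sources.
#include <stdint.h>
#include <stdio.h>

/* Local stand-ins for the kernel macros, assuming 4 KiB pages. */
#define MY_PAGE_SIZE 4096ULL
#define MY_PAGE_ALIGN(x) (((x) + MY_PAGE_SIZE - 1) & ~(MY_PAGE_SIZE - 1))

int main(void)
{
    uint64_t ok  = UINT64_MAX - 4095;   /* largest disksize that still aligns */
    uint64_t bad = UINT64_MAX - 4094;   /* one more and the addition wraps to 0 */

    printf("align(UINT64_MAX - 4095) = 0x%llx\n", (unsigned long long)MY_PAGE_ALIGN(ok));
    printf("align(UINT64_MAX - 4094) = 0x%llx\n", (unsigned long long)MY_PAGE_ALIGN(bad));
    return 0;
}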
6 days ago, a call to array_size() was added to the vzalloc() calls to protect against this overflow, with this commit.
There is no limit but there is an overhead.
"Note that zram uses about 0.1% of the size of the disk when not in use so a huge zram is wasteful."
https://www.kernel.org/doc/Documentation/blockdev/zram.txt
Also, disksize is a virtual size that depends purely on the input and on the compression ratio achieved via the chosen algorithm. disksize is the maximum uncompressed size, plus the general disk parameters.
The only 'actual' control is via mem_limit, which is the compressed size plus disk and zram overheads.
The compression ratio is completely dependent on the compression algorithm chosen from /proc/crypto; zlib and zstd are far more effective but far slower. It is also very dependent on the input: with text, zlib and zstd can achieve over double what lzo and lz4 will.
If the input is already compressed, any algorithm might achieve little to zero compression, and without a mem_limit it could grab much precious memory from the system.
mem_limit is the maximum you are prepared to let zram grab from the system, and a disksize larger than the expected compression ratio applied to mem_limit is likely a waste: it will never get used, but it is still part of the 0.1% empty-creation overhead. With the 0.1% figure above, for example, a 100 GB zram device costs roughly 100 MB of RAM in overhead even when empty, which is significant on a 2 GB device.
Maybe try https://github.com/StuartIanNaylor/zram-config

Golang : fatal error: runtime: out of memory

I am trying to use this package on GitHub for string matching. My dictionary is 4 MB. When creating the trie, I got fatal error: runtime: out of memory. I am using Ubuntu 14.04 with 8 GB of RAM and Go version 1.4.2.
It seems the error comes from line 99 (now) here: m.trie = make([]node, max)
The program stops at this line.
This is the error:
fatal error: runtime: out of memory
runtime stack:
runtime.SysMap(0xc209cd0000, 0x3b1bc0000, 0x570a00, 0x5783f8)
/usr/local/go/src/runtime/mem_linux.c:149 +0x98
runtime.MHeap_SysAlloc(0x57dae0, 0x3b1bc0000, 0x4296f2)
/usr/local/go/src/runtime/malloc.c:284 +0x124
runtime.MHeap_Alloc(0x57dae0, 0x1d8dda, 0x10100000000, 0x8)
/usr/local/go/src/runtime/mheap.c:240 +0x66
goroutine 1 [running]:
runtime.switchtoM()
/usr/local/go/src/runtime/asm_amd64.s:198 fp=0xc208518a60 sp=0xc208518a58
runtime.mallocgc(0x3b1bb25f0, 0x4d7fc0, 0x0, 0xc20803c0d0)
/usr/local/go/src/runtime/malloc.go:199 +0x9f3 fp=0xc208518b10 sp=0xc208518a60
runtime.newarray(0x4d7fc0, 0x3a164e, 0x1)
/usr/local/go/src/runtime/malloc.go:365 +0xc1 fp=0xc208518b48 sp=0xc208518b10
runtime.makeslice(0x4a52a0, 0x3a164e, 0x3a164e, 0x0, 0x0, 0x0)
/usr/local/go/src/runtime/slice.go:32 +0x15c fp=0xc208518b90 sp=0xc208518b48
github.com/mf/ahocorasick.(*Matcher).buildTrie(0xc2083c7e60, 0xc209860000, 0x26afb, 0x2f555)
/home/go/ahocorasick/ahocorasick.go:104 +0x28b fp=0xc208518d90 sp=0xc208518b90
github.com/mf/ahocorasick.NewStringMatcher(0xc208bd0000, 0x26afb, 0x2d600, 0x8)
/home/go/ahocorasick/ahocorasick.go:222 +0x34b fp=0xc208518ec0 sp=0xc208518d90
main.main()
/home/go/seme/substrings.go:66 +0x257 fp=0xc208518f98 sp=0xc208518ec0
runtime.main()
/usr/local/go/src/runtime/proc.go:63 +0xf3 fp=0xc208518fe0 sp=0xc208518f98
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:2232 +0x1 fp=0xc208518fe8 sp=0xc208518fe0
exit status 2
This is the content of the main function (taken from the same repo: test file)
var dictionary = InitDictionary()
var bytes = []byte("Partial invoice (€100,000, so roughly 40%) for the consignment C27655 we shipped on 15th August to London from the Make Believe Town depot. INV2345 is for the balance.. Customer contact (Sigourney) says they will pay this on the usual credit terms (30 days).")
var precomputed = ahocorasick.NewStringMatcher(dictionary)// line 66 here
fmt.Println(precomputed.Match(bytes))
Your structure is awfully inefficient in terms of memory; let's look at the internals. But before that, a quick reminder of the space required for some Go types (these are 32-bit sizes; on a 64-bit platform, int and pointers take 8 bytes and a slice header 24, so each node ends up roughly twice as large):
bool: 1 byte
int: 4 bytes
uintptr: 4 bytes
[N]type: N*sizeof(type)
[]type: 12 + len(slice)*sizeof(type)
Now, let's have a look at your structure:
type node struct {
root bool // 1 byte
b []byte // 12 + len(slice)*1
output bool // 1 byte
index int // 4 bytes
counter int // 4 bytes
child [256]*node // 256*4 = 1024 bytes
fails [256]*node // 256*4 = 1024 bytes
suffix *node // 4 bytes
fail *node // 4 bytes
}
OK, you can guess what happens here: each node weighs more than 2 KB, which is huge! Finally, let's look at the code that you use to initialize your trie:
func (m *Matcher) buildTrie(dictionary [][]byte) {
max := 1
for _, blice := range dictionary {
max += len(blice)
}
m.trie = make([]node, max)
// ...
}
You said your dictionary is 4 MB. If it is 4 MB in total, then it means that at the end of the for loop, max = 4M. If it holds 4M different words, then max = 4M * avg(word_length).
We'll take the first scenario, the nicer one. You are initializing a slice of 4M nodes, each of which uses 2 KB. Yup, that makes a nice 8 GB necessary.
You should review how you build your trie. From the Wikipedia page on the Aho-Corasick algorithm, each node holds one character, so there are at most 256 edges leaving the root, not 4M.
Some material to make it right: https://web.archive.org/web/20160315124629/http://www.cs.uku.fi/~kilpelai/BSA05/lectures/slides04.pdf
The node type has a memory size of 2084 bytes.
I wrote a little program to demonstrate the memory usage: https://play.golang.org/p/szm7AirsDB
As you can see, the three strings (11(+1) bytes in size) of dictionary := []string{"fizz", "buzz", "123"} require about 24 KB of memory.
If your dictionary has a length of 4 MB, you would need about 4 * 1024 * 1024 * 2084 bytes ≈ 8.1 GB of memory.
So you should try to decrease the size of your dictionary.
Setting the resource limit to unlimited worked for me:
if ulimit -a returns 0, run ulimit -c unlimited.
Maybe set a real size limit to be more secure.

how to get disk read/write bytes per second from /proc in programming on linux?

Purpose: I want to get the kind of information that the iostat command provides.
I already know that if I open /proc/diskstats or /sys/block/sdX/stat there is information such as sectors read and sectors written. So if I want to get read/write bytes per second, are the following formulas right?
read/write bytes per second:
(sectors read/write(now) - sectors read/write(last)) * 512 bytes / time interval
read/write operations per second:
(read/write IOs(now) + read/write merges(now) - read/write IOs(last) - read/write merges(last)) / time interval
So if I have a timer so that the control software reads the information from those two files every second and then applies the above formulas, will I get the correct values?
TL;DR: A sector is 512 bytes (octets): 1 sector is 512 bytes, each byte is 8 bits, and every bit is either 0 or 1 (not a superposition of them).
"The standard sector size of 512 bytes for magnetic disks was established ....[dubious – discuss] " (c) wiki https://en.wikipedia.org/wiki/Disk_sector
How to check the sector size used for I/O statistics (in /proc) on Linux:
Check how the iostat tool works (it shows kilobytes per second when started as iostat 1) - it is part of the sysstat package:
https://github.com/sysstat/sysstat/blob/master/iostat.c
* Read stats from /proc/diskstats.
void read_diskstats_stat(int curr)
...
/* major minor name rio rmerge rsect ruse wio wmerge wsect wuse running use aveq */
i = sscanf(line, "%u %u %s %lu %lu %lu %lu %lu %lu %lu %u %u %u %u",
&major, &minor, dev_name,
&rd_ios, &rd_merges_or_rd_sec, &rd_sec_or_wr_ios, &rd_ticks_or_wr_sec,
&wr_ios, &wr_merges, &wr_sec, &wr_ticks, &ios_pgr, &tot_ticks, &rq_ticks);
if (i == 14) {
....
sdev.rd_sectors = rd_sec_or_wr_ios;
....
sdev.wr_sectors = wr_sec;
....
* #fctr Conversion factor.
...
if (DISPLAY_KILOBYTES(flags)) {
printf(" kB_read/s kB_wrtn/s kB_read kB_wrtn\n");
*fctr = 2;
}
...
/* rrq/s wrq/s r/s w/s rsec wsec rqsz qusz await r_await w_await svctm %util */
... 4 columns skipped
cprintf_f(4, 8, 2,
S_VALUE(ioj->rd_sectors, ioi->rd_sectors, itv) / fctr,
S_VALUE(ioj->wr_sectors, ioi->wr_sectors, itv) / fctr,
So: read the sector count and divide by two to get kB/s (1 sector read is 0.5 kB read, 2 sectors read is 1 kB read, and so on). We can conclude that the sector is always 512 bytes. The same is stated in the docs, isn't it?:
internet search for "/proc/diskstats" ->
https://www.kernel.org/doc/Documentation/ABI/testing/procfs-diskstats ->
https://www.kernel.org/doc/Documentation/iostats.txt "I/O statistics fields" by ricklind from usa's ibm
Field 3 -- # of sectors read
This is the total number of sectors read successfully.
Field 7 -- # of sectors written
This is the total number of sectors written successfully.
No info about the sector size here (why?). Is the source code the best documentation (it may be)? The writer of /proc/diskstats lives in the kernel sources in the file block/genhd.c, function diskstats_show:
http://lxr.free-electrons.com/source/block/genhd.c?v=4.4#L1149
1170 seq_printf(seqf, "%4d %7d %s %lu %lu %lu "
1171 "%u %lu %lu %lu %u %u %u %u\n",
...
1176 part_stat_read(hd, sectors[READ]),
...
1180 part_stat_read(hd, sectors[WRITE]),
The sectors counters are defined in struct disk_stats in http://lxr.free-electrons.com/source/include/linux/genhd.h?v=4.4#L82
82 struct disk_stats {
83 unsigned long sectors[2]; /* READs and WRITEs */
It is read with part_stat_read and written with __part_stat_add
http://lxr.free-electrons.com/source/include/linux/genhd.h?v=4.4#L307
Adding to the sectors counter happens in blk_account_io_completion at http://lxr.free-electrons.com/source/block/blk-core.c?v=4.4#L2264
2264 void blk_account_io_completion(struct request *req, unsigned int bytes)
2265 {
2266 if (blk_do_io_stat(req)) {
2267 const int rw = rq_data_dir(req);
2268 struct hd_struct *part;
2269 int cpu;
2270
2271 cpu = part_stat_lock();
2272 part = req->part;
2273 part_stat_add(cpu, part, sectors[rw], bytes >> 9);
2274 part_stat_unlock();
2275 }
2276 }
It uses a hard-coded "bytes >> 9" to compute the sector count from the request size in bytes (why round down??); for a human, rather than a no-floating-point compiler, that is the same as bytes / 512.
There is also the blk_rq_sectors function (unused here) to get the sector count from a request, which does the same >> 9 conversion from bytes to sectors:
http://lxr.free-electrons.com/source/include/linux/blkdev.h?v=4.4#L853
841 static inline unsigned int blk_rq_bytes(const struct request *rq)
842 {
843 return rq->__data_len;
844 }
853 static inline unsigned int blk_rq_sectors(const struct request *rq)
854 {
855 return blk_rq_bytes(rq) >> 9;
856 }
The authors of the FS/VFS subsystem in Linux say in reply to https://lkml.org/lkml/2015/8/17/234 "Why is SECTOR_SIZE = 512 inside kernel ?" (2015):
#define SECTOR_SHIFT 9
Message https://lkml.org/lkml/2015/8/17/269 by Theodore Ts'o:
It's cast in stone. There are too many places all over the kernel,
especially in a huge number of file systems, which assume that the
sector size is 512 bytes. So above the block layer, the sector size
is always going to be 512.
This is actually better for user space programs using
/proc/diskstats, since they don't need to know whether a particular
underlying hardware is using 512, 4k, (or if the HDD manufacturers
fantasies become true 32k or 64k) sector sizes.
For similar reasons, st_blocks in struct stat is always in units of 512
bytes. We don't want to force userspace to have to figure out whether
the underlying file system is using 1k, 2k, or 4k. For that reason
the units of st_blocks is always going to be 512 bytes, and this is
hard-coded in the POSIX standard.
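Putting this together for the original question, here is a small user-space sketch (not taken from iostat or the kernel) that samples /proc/diskstats twice, one second apart, and applies the formulas from the question. The device name is just an example, and the field positions follow iostats.txt.
#include <stdio.h>
#include <string.h>
#include <unistd.h>

struct snap {
    unsigned long long rd_ios, rd_merges, rd_sec;   /* fields 1, 2, 3 */
    unsigned long long wr_ios, wr_merges, wr_sec;   /* fields 5, 6, 7 */
};

/* Find one device line in /proc/diskstats and keep its first 11 stat fields
 * (newer kernels append more fields; sscanf simply ignores them here). */
static int read_snap(const char *dev, struct snap *s)
{
    FILE *f = fopen("/proc/diskstats", "r");
    char line[512], name[64];
    unsigned int major, minor;
    unsigned long long v[11];
    int found = 0;

    if (!f)
        return -1;
    while (fgets(line, sizeof(line), f)) {
        int n = sscanf(line,
                       "%u %u %63s %llu %llu %llu %llu %llu %llu %llu %llu %llu %llu %llu",
                       &major, &minor, name, &v[0], &v[1], &v[2], &v[3], &v[4],
                       &v[5], &v[6], &v[7], &v[8], &v[9], &v[10]);
        if (n == 14 && strcmp(name, dev) == 0) {
            s->rd_ios = v[0]; s->rd_merges = v[1]; s->rd_sec = v[2];
            s->wr_ios = v[4]; s->wr_merges = v[5]; s->wr_sec = v[6];
            found = 1;
            break;
        }
    }
    fclose(f);
    return found ? 0 : -1;
}

int main(int argc, char **argv)
{
    const char *dev = argc > 1 ? argv[1] : "sda";   /* example device name */
    struct snap a, b;

    if (read_snap(dev, &a))
        return 1;
    sleep(1);                                       /* the 1-second interval */
    if (read_snap(dev, &b))
        return 1;

    /* Sector counters are in fixed 512-byte units, as discussed above. */
    printf("read : %llu bytes/s, %llu ops/s\n",
           (b.rd_sec - a.rd_sec) * 512ULL,
           (b.rd_ios + b.rd_merges) - (a.rd_ios + a.rd_merges));
    printf("write: %llu bytes/s, %llu ops/s\n",
           (b.wr_sec - a.wr_sec) * 512ULL,
           (b.wr_ios + b.wr_merges) - (a.wr_ios + a.wr_merges));
    return 0;
}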

Accessing large memory (32 GB) using /dev/zero

I want to use /dev/zero for storing lots of temporary data (32 GB or around that). I am doing this:
fd = open("/dev/zero", O_RDWR );
// <Exit on error>
vbase = (uint64_t*) mmap(NULL, MEMSIZE, PROT_READ|PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS, fd, 0);
// <Exit on error>
ftruncate(fd, (off_t) MEMSIZE);
I am changing MEMSIZE from 1GB to 32 GB (performing a memtest) to see if I can really access all that range. I am running out of memory at 1 GB.
Is there something I am missing ? Am I mmap'ing correctly ?
Or am I running into some system limit ? How can I check if this is happening ?
P.S.: I run many programs that generate many gigs of data within a single file, so I don't know if there is an artificial upper limit; it just seems that I am running into something.
I have to admit I'm confused about what you're actually trying to do. Anyway, a couple of reasons why what you are doing might not work:
From the mmap(2) manpage: "MAP_ANONYMOUS
The mapping is not backed by any file; its contents are initialized to zero. The fd and offset arguments are ignored;"
From the null(4) manpage: "Data written to a null or zero special file is discarded."
So anyway, before MAP_ANONYMOUS, mmap'ing /dev/zero was sometimes used to get anonymous (i.e. not backed by any file) memory. No need to do both. In either case, actually writing to all that memory implies that you need some kind of backing store for it, either physical memory or swap space. If you cannot guarantee that, maybe it's better to mmap() a real file on a filesystem with enough space?
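A minimal sketch of that last suggestion (the path and size below are only examples): size the backing file with ftruncate() before mapping it, and map the file descriptor MAP_SHARED instead of combining /dev/zero with MAP_ANONYMOUS.
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define MEMSIZE (32ULL * 1024 * 1024 * 1024)   /* 32 GB, as in the question */

int main(void)
{
    int fd = open("/tmp/bigfile", O_RDWR | O_CREAT, 0600);   /* example path */
    if (fd < 0) { perror("open"); return 1; }

    /* Grow the file first: a file-backed mapping cannot extend the file. */
    if (ftruncate(fd, (off_t)MEMSIZE) != 0) { perror("ftruncate"); return 1; }

    uint64_t *vbase = mmap(NULL, MEMSIZE, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, 0);   /* note: no MAP_ANONYMOUS */
    if (vbase == MAP_FAILED) { perror("mmap"); return 1; }

    vbase[0] = 42;                                /* touch both ends of the range */
    vbase[MEMSIZE / sizeof(*vbase) - 1] = 42;

    munmap(vbase, MEMSIZE);
    close(fd);
    return 0;
}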
Look into Linux kernel mmap implementation:
vm_mmap vm_mmap_pgoff  do_mmap_pgoff  mmap_region  file->f_op->mmap(file, vma)
In the function do_mmap_pgoff, it checks the max_map_count
if (mm->map_count > sysctl_max_map_count)
return -ENOMEM;
root> sysctl -a | grep map_count
vm.max_map_count = 65530
In the function mmap_region, it checks the process virtual address limit (whether it is unlimited).
int may_expand_vm(struct mm_struct *mm, unsigned long npages)
{
unsigned long cur = mm->total_vm; /* pages */
unsigned long lim;
lim = rlimit(RLIMIT_AS) >> PAGE_SHIFT;
if (cur + npages > lim)
return 0;
return 1;
}
root> ulimit -a | grep virtual
virtual memory (kbytes, -v) unlimited
In the Linux kernel, the init task has this rlimit setting by default:
[RLIMIT_AS] = { RLIM_INFINITY, RLIM_INFINITY }, \
#ifndef RLIM_INFINITY
# define RLIM_INFINITY (~0UL)
#endif
In order to prove it, use the test_mem program
tmp> ./test_mem
RLIMIT_AS limit got successfully:
soft_limit=4294967295, hard_limit=4294967295
RLIMIT_DATA limit got successfully:
soft_limit=4294967295, hard_limit=4294967295
#include <stdio.h>
#include <sys/resource.h>
int main(void)
{
    struct rlimit rl;
    int ret;
    ret = getrlimit(RLIMIT_AS, &rl);
    if (ret == 0) {
        printf("RLIMIT_AS limit got successfully:\n");
        printf("soft_limit=%lld, hard_limit=%lld\n", (long long)rl.rlim_cur, (long long)rl.rlim_max);
    }
    return 0;
}
That means that "unlimited" appears as 0xFFFFFFFF to a 32-bit app on a 64-bit OS. If you change the shell's virtual address limit, it is reflected correctly:
root> ulimit -v 1024000
tmp> ./test_mem
RLIMIT_AS limit got successfully:
soft_limit=1048576000, hard_limit=1048576000
RLIMIT_DATA limit got successfully:
soft_limit=4294967295, hard_limit=4294967295
In mmap_region, there is an accounting check:
accountable_mapping -> security_vm_enough_memory_mm -> cap_vm_enough_memory -> __vm_enough_memory -> overcommit/swap/admin and user reserve handling.
Please go through these three checks (map count, virtual address limit, and memory accounting) to see which limit you are hitting.

How to correctly calculate address spaces?

Below is an example of a question given on my last test in a Computer Engineering course. Anyone mind explaining to me how to get the start/end addresses of each? I have listed the correct answers at the bottom...
The MSP430F2410 device has an address space of 64 KB (the basic MSP430 architecture). Fill in the table below if we know the following. The first 16 bytes of the address space (starting at the address 0x0000) is reserved for special function registers (IE1, IE2, IFG1, IFG2, etc.), the next 240 bytes is reserved for 8-bit peripheral devices, and the next 256 bytes is reserved for 16-bit peripheral devices. The RAM memory capacity is 2 Kbytes and it starts at the address 0x1100. At the top of the address space is 56KB of flash memory reserved for code and interrupt vector table.
What Start Address End Address
Special Function Registers (16 bytes) 0x0000 0x000F
8-bit peripheral devices (240 bytes) 0x0010 0x00FF
16-bit peripheral devices (256 bytes) 0x0100 0x01FF
RAM memory (2 Kbytes) 0x1100 0x18FF
Flash Memory (56 Kbytes) 0x2000 0xFFFF
For starters, don't get thrown off by what's stored in each segment - that's only going to confuse you. The problem is just asking you to figure out the hex numbering, and that's not too difficult. Here are the requirements:
64 KB total memory
The first 16 bytes of the address space (starting at the address 0x0000) is reserved for special function registers (IE1, IE2, IFG1, IFG2, etc.)
The next 240 bytes is reserved for 8-bit peripheral devices
The next 256 bytes is reserved for 16-bit peripheral devices
The RAM memory capacity is 2 Kbytes and it starts at the address 0x1100
At the top of the address space is 56KB of flash memory reserved for code and interrupt vector table.
Since each hex digit in your memory address can handle 16 values (0-F), you'll need 4 digits to display 64KB of memory (16 ^ 4 = 65536, or 64K).
You start with 16 bytes, and that covers 0x0000 - 0x000F (one full digit of your address). That means that the next segment, which starts immediately after it (8-bit devices), begins at 0x0010 (the next byte), and since it's 240 bytes long, it ends at byte 256 (240 + 16), or 0x00FF.
The next segment (16-bit devices) starts at the next byte, which is 0x0100, and is 256 bytes long - that puts the end at 0x01FF.
Then comes 2 KB (2048 bytes) of RAM, but it starts at 0x1100, as the description states, instead of immediately after the previous segment, so that's your starting address. Add 2048 to that, subtract 1 for the last used address, and you get 0x18FF.
The last segment covers the upper section of the memory, so you'll have to work backwards. You know it ends at 0xFFFF (the end of the available memory), and it's 56 KB (0xE000 bytes) long. If you imagine that this segment started at 0x0000, it would end at 0xDFFF. That leaves 0x2000 addresses unaccounted for (0xE000-0xFFFF), so you know that this segment has to start at 0x2000 to end at the upper end of the memory space.
I hope that's more clear, though when I read over it, I don't know that it's any help at all :( Maybe that's why I'll leave teaching that concept to somebody more qualified...
#include <stdio.h>
#include <stdint.h>
#define NUM_SIZES 5
int main(void)
{
    uint16_t sizes[NUM_SIZES] = {16, 240, 256, 2 * 1024, 56 * 1024};  /* region sizes in bytes */
    uint16_t address = 0;
    printf("Start End\n");
    for (int i = 0; i < NUM_SIZES; i++)
    {
        printf("0x%04X 0x%04X\n", address, address + sizes[i] - 1);
        address += sizes[i];
    }
    return 0;
}
