When I run make run-asm-tests in the emulator directory of rocket-chip, I get a bunch of *.out files in the emulator/output directory. These appear to be instruction traces but the columns are not labeled. I was wondering what each of these columns means. Thanks!
For example:
C0: 82212 [0] pc=[000000081c] W[r 0=0000000000000400][0] R[r 8=0000000000000000] R[r 0=0000000000000000] inst=[40044403] lbu s0, 1024(s0)
C0: 82213 [0] pc=[000000081c] W[r 0=0000000000000400][0] R[r 8=0000000000000000] R[r 0=0000000000000000] inst=[40044403] lbu s0, 1024(s0)
C0: 82214 [1] pc=[0000000820] W[r 8=0000000000000000][1] R[r 8=0000000000000000] R[r 3=0000000000000003] inst=[00347413] andi s0, s0, 3
C0: 82215 [1] pc=[0000000824] W[r 0=0000000000000000][0] R[r 8=0000000000000000] R[r 0=0000000000000000] inst=[fe0408e3] beqz s0, pc - 16
C0: 82216 [1] pc=[0000000814] W[r 8=0000000000000000][1] R[r 0=0000000000000000] R[r20=0000000000000003] inst=[f1402473] csrr s0, mhartid
The first column C0: stands for core 0. If you have multiple cores, each will print its own trace prefixed with its hartid.
The second column 82212 through 82216 is the current cycle number.
The third column [0] or [1] shows whether this instruction committed this cycle (i.e. finished without exceptions).
The fourth column pc=[...] shows the current program counter value.
The fifth column W[r 8=...][1] shows the destination register's index in the suffix of the r, the value being written to that register after the =, and whether the write actually happens ([1]) or not ([0]).
The sixth column R[r 8=...] shows the first source register's index in the suffix of the r and the value read from that register after the =.
The seventh column is the same as the sixth but for the second source register.
The eighth column inst=[...] shows the bits in the current instruction.
The ninth and final column shows a disassembly of the current instruction.
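For scripting against these traces, here is a minimal parsing sketch (Python; the field names are my own, and the regex is written only against the sample lines above, so other configurations may need tweaks):

import re

# Field layout as described above: core, cycle, commit flag, pc,
# writeback (reg, value, write-enable), two source reads, raw bits, disasm.
TRACE_RE = re.compile(
    r"C(?P<core>\d+):\s+(?P<cycle>\d+)\s+\[(?P<commit>[01])\]\s+"
    r"pc=\[(?P<pc>[0-9a-f]+)\]\s+"
    r"W\[r\s?(?P<rd>\d+)=(?P<wdata>[0-9a-f]+)\]\[(?P<wen>[01])\]\s+"
    r"R\[r\s?(?P<rs1>\d+)=(?P<rs1val>[0-9a-f]+)\]\s+"
    r"R\[r\s?(?P<rs2>\d+)=(?P<rs2val>[0-9a-f]+)\]\s+"
    r"inst=\[(?P<inst>[0-9a-f]+)\]\s+(?P<disasm>.+)")

line = ("C0: 82214 [1] pc=[0000000820] W[r 8=0000000000000000][1] "
        "R[r 8=0000000000000000] R[r 3=0000000000000003] "
        "inst=[00347413] andi s0, s0, 3")
m = TRACE_RE.match(line)
print(m.group("cycle"), m.group("commit"), m.group("disasm"))
# -> 82214 1 andi s0, s0, 3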
I have learned that MIFARE Classic authentication has a weakness related to the parity bits. But I wonder: how does the reader send the parity bits to the tag?
For example, here is the trace of a failed authentication attempt:
reader:26
tag: 02 00
reader:93 20
tag: c1 08 41 6a e2
reader:93 70 c1 08 41 6a e2 e4 7c
tag: 18 37 cd
reader:60 00 f5 7b
tag: ab cd 19 49
reader:59 d5 92 0f 15 b9 d5 53
tag: a //error code: 0x5
I know that after the anti-collision, the tag will send NT (32-bit) as a challenge to the reader, and the reader responds with the challenge {NR} (32-bit) and {AR} (32-bit). But I don't know where the parity bits are in the above example. Which are the parity bits?
The example trace that you posted in your question either does not contain information about parity bits or all parity bits were valid (according to ISO/IEC 14443-3).
E.g. when the communication trace shows that the reader sends 60 00 f5 7b, the actual data sent over the RF interface would be (P is the parity bit):
b1 ... b8 P b1 ... b8 P b1 ... b8 P b1 ... b8 P
S 0000 0110 1 0000 0000 1 1010 1111 1 1101 1110 1 E
A parity bit is sent after every 8th bit (i.e. after each octet) and makes sure that those 9 bits together contain an odd number of binary ones (odd parity). It therefore forms a 1-bit checksum over that byte. Your trace shows only the bytes, not the parity bits in between them.
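As a quick cross-check, a few lines of Python (the function name is mine, not from any MIFARE tool) reproduce the bit layout above:

def frame_with_parity(data):
    # Each byte goes on the wire LSB first, followed by one parity bit
    # chosen so that the 9 bits together contain an odd number of ones.
    out = []
    for b in data:
        bits = [(b >> i) & 1 for i in range(8)]   # LSB first
        parity = 1 - (sum(bits) % 2)              # odd parity
        out.append("".join(map(str, bits)) + " " + str(parity))
    return " ".join(out)

print(frame_with_parity([0x60, 0x00, 0xF5, 0x7B]))
# -> 00000110 1 00000000 1 10101111 1 11011110 1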
The vulnerability regarding parity bits in MIFARE Classic is that parity bits are encrypted together with the actual data (cf. de Koning Gans, Hoepman,
and Garcia (2008): A Practical Attack on the MIFARE Classic, in CARDIS 2008, LNCS 5189, pp. 267-282, Springer).
Consequently, when you look at the communication trace without considering encryption, there may be parity errors according to the ISO/IEC 14443-3 parity calculation rule since the encrypted parity bit might not match the parity bit for the raw data stream. Tools like the Proxmark III would indicate such observed parity errors as exclamation marks ("!") after the corresponding bytes in the communication trace.
I have a production IIS7 server showing CPU pegged (90% or greater) at random times. I've captured a few memory dumps, but I am not skilled enough with WinDBG to really understand what I am looking at.
!runaway
User Mode Time
Thread Time
41:8ec 0 days 0:36:13.109
42:de8 0 days 0:30:51.997
44:80c 0 days 0:10:13.177
45:e2c 0 days 0:09:41.758
46:154 0 days 0:08:25.677
47:8e4 0 days 0:06:46.179
25:230 0 days 0:01:06.144
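For orientation, the cumulative user-mode times above can be turned into relative shares with a few lines (Python; note that !runaway reports time since thread start, so this hints at, rather than proves, who is burning CPU right now):

# Times copied from the !runaway output above.
times = {
    "41:8ec": "0:36:13.109", "42:de8": "0:30:51.997", "44:80c": "0:10:13.177",
    "45:e2c": "0:09:41.758", "46:154": "0:08:25.677", "47:8e4": "0:06:46.179",
    "25:230": "0:01:06.144",
}

def seconds(t):
    h, m, s = t.split(":")
    return int(h) * 3600 + int(m) * 60 + float(s)

total = sum(seconds(t) for t in times.values())
for tid, t in sorted(times.items(), key=lambda kv: -seconds(kv[1])):
    print(f"{tid}  {100 * seconds(t) / total:5.1f} %")
# threads 41 and 42 together account for roughly 65% of all user time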
Jumping over to thread 41, the call site makes me believe (after googling ufmanager) that Crystal Reports is hanging on to something. But I need more evidence to proceed.
0:041> k
# Child-SP RetAddr Call Site
00 00000000`0ed7f510 00000000`449910e1 u2lexch+0x1198
01 00000000`0ed7f530 00000000`44992a61 u2lexch+0x10e1
02 00000000`0ed7f570 00000000`44a32cd2 u2lexch!UFEndJob+0x11
03 00000000`0ed7f5a0 00000000`44a33311 ufmanager+0x2cd2
04 00000000`0ed7f610 00000000`0c7d6e45 ufmanager+0x3311
05 00000000`0ed7f640 00000000`0c7d3848 cslibu_3_0!CSLib300::CSThreadSafeDLL::UninitForThread+0x965
06 00000000`0ed7f670 00000000`76fb9bd1 cslibu_3_0!CSLib300::CSDLLServerThreadWnd::ServerWindowProc+0xd8
07 00000000`0ed7f6a0 00000000`76fb98da user32!UserCallWinProcCheckWow+0x1ad
08 00000000`0ed7f760 00000000`72a21922 user32!DispatchMessageWorker+0x3b5
09 00000000`0ed7f7e0 00000000`72a222a4 mfc80u!AfxInternalPumpMessage+0x52
0a 00000000`0ed7f810 00000000`72a217fd mfc80u!CWinThread::Run+0x70
0b 00000000`0ed7f850 00000000`74ae37d7 mfc80u!_AfxThreadEntry+0x131
0c 00000000`0ed7f950 00000000`74ae3894 msvcr80!endthreadex+0x47
0d 00000000`0ed7f980 00000000`770b652d msvcr80!endthreadex+0x104
0e 00000000`0ed7f9b0 00000000`771ec541 kernel32!BaseThreadInitThunk+0xd
0f 00000000`0ed7f9e0 00000000`00000000 ntdll!RtlUserThreadStart+0x1d
Can I get the report parameters, or other such information so I can "point my finger" at the cause?
Thank you in advance.
I'm trying to reverse engineer a binary file format, but it has no magic bytes, and no specific extension. I can influence only a single aspect of the file: a short string. By trying different strings, I was able to figure out how data is stored in the file. It seems that the whole file uses some sort of simple encoding. I hope that finding the exact encoding allows me to narrow down my search for the file format. I know the file is generated by a Windows program written in C++.
Now, after much trial-and-error, I found that some sections of the file are encoded in runs. Each run starts with a byte that indicates how many bytes will follow and where the data is retrieved.
000ddddd (1 byte): Take the following (ddddd)+1 bytes from the encoded data.
111····· ···ddddd ···bbbbb (3 bytes): Go back (bbbbb)+1 bytes in the decoded data, and take the next (ddddd)+9 bytes from it.
ddd····· ··bbbbbb (2 bytes): Go back (bbbbbb)+1 bytes in the decoded data, and take the next (ddd)+2 bytes from it.
Here's an example:
This is the start of the file, with the UTF-16 string abracadabra encoded in it:
. . . a . b . r . . c . . d . € .
0C 20 03 04 61 00 62 00 72 20 05 00 63 20 03 00 64 20 03 80 0D
To decode the string:
0C number of Unicode chars: 12 (11 chars + \0)
20 03 . . . ??
04 next 5
61 00 a .
62 00 b .
72 r
20 05 . a . back 6, take 3
00 next 1
63 c
20 03 . a . back 4, take 3
00 next 1
64 d
20 03 . a . back 4, take 3
80 0D b . r . a . back 14, take 6
This results in (UTF-16):
a . b . r . a . c . a . d . a . b . r . a .
61 00 62 00 72 00 61 00 63 00 61 00 64 00 61 00 62 00 72 00 61 00
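For reference, the three run formats can be replayed with a short decoder sketch (Python; the function name is mine, and the single byte of seeded history is an assumption standing in for earlier decoded data that the first back-reference reaches):

def decode_runs(src, history=b""):
    out = bytearray(history)
    i = 0
    while i < len(src):
        tag = src[i] >> 5
        if tag == 0:                      # 000ddddd: literal run
            n = (src[i] & 0x1F) + 1
            out += src[i + 1:i + 1 + n]
            i += 1 + n
        elif tag == 7:                    # 111..... ...ddddd ...bbbbb
            n = (src[i + 1] & 0x1F) + 9
            back = (src[i + 2] & 0x1F) + 1
            i += 3
            for _ in range(n):            # byte-wise copy allows overlap
                out.append(out[-back])
        else:                             # ddd..... ..bbbbbb
            n = tag + 2
            back = (src[i + 1] & 0x3F) + 1
            i += 2
            for _ in range(n):
                out.append(out[-back])
    return bytes(out[len(history):])

# Runs after the 0C length byte and the unexplained 20 03; one assumed
# 00 byte of history lets the first back-reference resolve.
runs = bytes.fromhex("04 61 00 62 00 72 20 05 00 63 20 03 00 64 20 03 80 0d")
print(decode_runs(runs, history=b"\x00").decode("utf-16-le"))  # -> abracadabra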
However, I have no clue as to what encoding/compression algorithm this might be. It looks like some variant of LZ that doesn't use a dictionary (like LZ77), but so far I haven't been able to find any algorithm that matches this description. I'm also not sure whether the entire file is encoded like this, or only portions of it.
Do you know this encoding? Or do you have any hints for things I might look for in the file to identify the encoding?
After your edit I think it's LZF with the following differences to your observations:
The magic header and the indication of compressed vs. uncompressed have been removed in your example (not too surprising if it's embedded in a file).
You took the block length as one byte, but it is two bytes and big-endian, so the preceding 0x00 is part of the length; your decoding still works.
Could be NTFS compression, which is LZNT1. This idea is supported by the platform and the apparent 2-byte structure, along with the byte-alignment of the actual data.
The following elements are specific to this algorithm:
Chunks: Segments of data that are compressed, uncompressed, or that denote the end of the buffer.
Chunk header: The header for a compressed or uncompressed chunk of data.
Flag bytes: A bit flag whose bits, read from low order to high order, specify the formats of the data elements that follow. For example, bit 0 corresponds to the first data element, bit 1 to the second, and so on. If the bit corresponding to a data element is set, the element is a 2-byte compressed word; otherwise, it is a 1-byte literal value.
Flag group: A flag byte followed by zero or more data elements, each of which is a single literal byte or a 2-byte compressed word.
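For illustration only, here is a minimal Python sketch of walking one flag group as described above; it deliberately does not decode the offset/length packing inside a compressed word:

def split_flag_group(data, pos):
    # One flag byte, then up to 8 data elements; bit i of the flag byte
    # says whether element i is a 2-byte compressed word or a literal.
    flags = data[pos]
    pos += 1
    elements = []
    for bit in range(8):
        if pos >= len(data):
            break
        if flags & (1 << bit):
            elements.append(("word", data[pos:pos + 2]))
            pos += 2
        else:
            elements.append(("literal", data[pos:pos + 1]))
            pos += 1
    return elements, pos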
I'm developing a device driver for a Xilinx Virtex 6 PCIe custom board.
When doing a DMA write (from host to device), here is what happens:
user space app:
a. fill buffer with the following byte pattern (tested up to 16 kB)
00 00 .. 00 (64 bytes)
01 01 .. 01 (64 bytes)
...
ff ff .. ff (64 bytes)
00 00 .. 00 (64 bytes)
01 01 .. 01 (64 bytes)
etc.
b. call custom ioctl to pass pointer to buffer and size
kernel space:
a. retrieve buffer (bufp) with
copy_from_user(ptdev->kbuf, bufp, cnt)
b. setup and start DMA
b1. //setup physical address
iowrite32(cpu_to_be32((u32) ptdev->kbuf_dma_addr),
ptdev->region0 + TDO_DMA_HOST_ADDR);
b2. //setup transfer size
iowrite32(cpu_to_be32( ((cnt+3)/4)*4 ),
ptdev->region0 + TDO_DMA_BYTELEN);
b3. //memory barrier to make sure kbuf is in memory
mb();
//start dma
b4. iowrite32(cpu_to_be32(TDO_DMA_H2A | TDO_DMA_BURST_FIXED | TDO_DMA_START),
ptdev->region0 + TDO_DMA_CTL_STAT);
c. put process to sleep
wait_res = wait_event_interruptible_timeout(ptdev->dma_queue,
!(tdo_dma_busy(ptdev, &dma_stat)),
timeout);
d. check wait_res result and dma status register and return
Note that the kernel buffer is allocated once at device probe with:
ptdev->kbuf = pci_alloc_consistent(dev, ptdev->kbuf_size, /* 512 kB */
&ptdev->kbuf_dma_addr);
device PCIe TLP dump (obtained through a logic analyzer after the Xilinx core):
a. TLP received (by the device)
a1. 40000001 0000000F F7C04808 37900000 (MWr corresponds to b1 above)
a2. 40000001 0000000F F7C0480C 00000FF8 (MWr corresponds to b2 above)
a3. 40000001 0000000F F7C04800 00010011 (MWr corresponds to b4 above)
b. TLP sent (by the device)
b1. 00000080 010000FF 37900000 (MRd 80h DW # addr 37900000h)
b2. 00000080 010000FF 37900200 (MRd 80h DW # addr 37900200h)
b3. 00000080 010000FF 37900400 (MRd 80h DW # addr 37900400h)
b4. 00000080 010000FF 37900600 (MRd 80h DW # addr 37900600h)
...
c. TLP received (by the device)
c1. 4A000020 00000080 01000000 00 00 .. 00 01 01 .. 01 CplD 128B
c2. 4A000020 00000080 01000000 02 02 .. 02 03 03 .. 03 CplD 128B
c3. 4A000020 00000080 01000000 04 04 .. 04 05 05 .. 05 CplD 128B
c4. 4A000020 00000080 01000000 06 06 .. 0A 0A 0A .. 0A CplD 128B <=
c5. 4A000010 00000040 01000040 07 07 .. 07 CplD 64B <=
c6. 4A000010 00000040 01000040 0B 0B .. 0B CplD 64B <=
c7. 4A000020 00000080 01000000 08 08 .. 08 09 09 .. 09 CplD 128B <=
c8. 4A000020 00000080 01000000 0C 0C .. 0C 0D 0D .. 0D CplD 128B
.. the remaining bytes are transferred correctly and
the total number of bytes (FF8h) matches the requested size
signal interrupt
Now, this apparent memory ordering error happens with high probability (0.8 < p < 1), and the ordering mismatch happens at different random points in the transfer.
EDIT: Note that point c4 above would indicate that the memory is not filled in the right order by the kernel driver (I suppose the memory controller fills TLPs with contiguous memory). Since 64B is the cache-line size, maybe this has something to do with cache operations.
When I disable cache on the kernel buffer with,
echo "base=0xaf180000 size=0x00008000 type=uncachable" > /proc/mtrr
the error still happens, but much less often (p < 0.1, depending on transfer size).
This only happens on i7-4770 (Haswell) based machines (tested on 3 identical machines, with 3 boards).
I tried kernel 2.6.32 (RH6.5), stock 3.10.28, and stock 3.13.1 with the same results.
I tried the code and device in an i7-610 QM57 based machine and a Xeon 5400 based machine without any issues.
Any ideas/suggestions are welcome.
Best regards
Claudio
I know this is an old thread, but the reason for the "errors" is completion reordering. Multiple outstanding read requests don't have to be answered in order. Completions are only in order for the same request.
On top of that, the same tag is assigned to every request here, which is illegal while those requests are active at the same time.
In the example provided, all MemRd TLPs have the same TAG. You can't reuse a TAG until you have received the last corresponding CplD with that TAG. So if you send a MemRd, wait until you get the CplD with its tag, and only then fire the next MemRd, all your data will arrive in order (but bus utilization will be low and you won't reach high bandwidth).
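As a toy sketch of the bookkeeping (Python; purely illustrative, nothing to do with the actual core or driver): a tag returns to the free pool only once its final completion has arrived, and the tag is what steers completion data to the right buffer offset regardless of arrival order:

from collections import deque

free_tags = deque(range(32))      # tags not currently outstanding
outstanding = {}                  # tag -> buffer offset of the request

def issue_read(offset):
    tag = free_tags.popleft()     # never reuse a tag that is still in flight
    outstanding[tag] = offset
    return tag

def on_completion(tag, data, buffer, final):
    offset = outstanding[tag]     # the tag identifies the request...
    buffer[offset:offset + len(data)] = data   # ...so data lands correctly
    if final:                     # last CplD for this tag: tag is free again
        del outstanding[tag]
        free_tags.append(tag)
    else:                         # CplDs for one tag arrive in order,
        outstanding[tag] = offset + len(data)  # so the next one continues here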
Also read this: pci_alloc_consistent uncached memory. It doesn't look like a cache issue on your platform. I would rather debug the device core.
QM57 supports PCIe 2.0
http://www.intel.com/Products/Notebook/Chipsets/QM57/qm57-overview.htm
whereas I imagine the motherboard of the i7-4770 machine supports PCIe 3.0:
http://ark.intel.com/products/75122
I suspect there might be a kind of negotiation failure between the PCIe 3.0 motherboard and your V6 device (which is PCIe 2.0 as well).
I have a binary file; the definition of its content is below (all data is stored in little endian, i.e. least significant byte first). The example numbers below are hex:
11 63 39 46 --- Time, UTC in seconds since 1 Jan 1970.
01 00 --- 0001 = No Fix, 0002 = SPS
97 85 ff e0 7b db 4c 40 --- Latitude, as double
a1 d5 ce 56 8d 26 28 40 --- Longitude, as double
f0 37 e1 42 --- Height in meters, as float
fe 2b f0 3a --- Speed in km/h, as float
00 00 00 00 --- Heading (degrees ?), as float
01 00 --- RCR, log reason. 0001=Time, 0004=Distance
59 20 6a f3 4a 26 e3 3f --- Distance in meters, as double,
2a --- ? Don't know
a8 --- Checksum, xor of all bytes above not including 0x2a
The data from the binary file, in hex, is as below:
"F25D39460200269652F5032445401F4228D79BCC54C09A3A2743B4ADE73F2A83"
I would appreciate it if you could help me translate this data line based on the definition above.
Probably wrong, but here's a shot at it using Ruby:
hex = "F25D39460200269652F5032445401F4228D79BCC54C09A3A2743B4ADE73F2A83"
ints = hex.scan(/../).map{ |s| s.to_i(16) }
raw = ints.pack('C*')
# NB: this sample record seems to omit the heading, RCR, and distance fields
fields = raw.unpack('VvEEeeCC') # time, fix, lat, lon, height, speed, ?, checksum
p fields
#=> [1178164722, 2, 42.2813707974677, -83.1970117467067, 167.242889404297, 1.80999994277954, 42, 131]
p Time.at( fields.first )
#=> 2007-05-02 21:58:42 -0600
I'd appreciate it if someone well-versed in #pack and #unpack would show me a better way to accomplish the first three lines.
My Cygnus Hex Editor could load such a file and, using structure templates, display the data in its native formats.
Beyond that, it's just a matter of going through each value and working out the translation for each byte.
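For what it's worth, a minimal Python sketch that decodes the sample line, under the assumption that this particular record simply omits the heading, RCR, and distance fields (its 32-byte length and the checksum only add up that way):

import struct
from datetime import datetime, timezone

line = bytes.fromhex(
    "F25D39460200269652F5032445401F4228D79BCC54C09A3A2743B4ADE73F2A83")

# <IHddff = time, fix, lat, lon, height, speed, all little endian (30 bytes)
t, fix, lat, lon, height, speed = struct.unpack_from("<IHddff", line, 0)
unknown, checksum = line[30], line[31]

calc = 0
for b in line[:30]:               # xor of all bytes before the unknown 0x2A
    calc ^= b

print(datetime.fromtimestamp(t, tz=timezone.utc))   # 2007-05-03 03:58:42+00:00
print(fix, round(lat, 5), round(lon, 5))            # 2 42.28137 -83.19701
print(round(height, 2), round(speed, 2))            # 167.24 1.81
print(hex(checksum), hex(calc))                     # 0x83 0x83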