Implementation of a Command - riscv

I am a bit lost trying to understand the implementation of a specific instruction.
In this example, the instruction 0x00c6ba23 is passed, which is 0000 0000 1100 0110 1011 1010 0010 0011 in binary.
I am attempting to find the ALU control unit’s inputs for this instruction.
From this I can see
opcode = 0100011
imm[4:0] = 10100
funct3 = 011 (incorrect...)
rs1 = 01101
rs2 = 01100
imm[11:5] = 0000000
I am using this image to decode it
My question is: how do I get the ALU control bits and ALUOp control bits for this instruction? And why is the instruction SD, even though funct3 is showing 011 instead of 111?

... why is the instruction SD, even though funct3 is showing 011 instead of 111?
011 is correct. The funct3 bits must be 011 in order for this to be an SD instruction. According to page 105 of https://content.riscv.org/wp-content/uploads/2017/05/riscv-spec-v2.2.pdf the SD instruction has the format:
| imm[11:5] | rs2 | rs1 | 011 | imm[4:0] | 0100011 |
If the funct3 bits were 111 then this instruction would not be SD.
... how do I get the ALU control bits and ALUOp control bits for this function?
Since this is an SD instruction, you can read those bits straight out of the SD line of the lower table in the diagram that you referenced in your question.
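If it helps to double-check the decoding itself, here is a small C sketch (mine, not from either post) that pulls the S-type fields out of 0x00c6ba23 using the bit positions from the format above:

#include <stdio.h>

/* Extract the S-type fields of a RISC-V instruction word; the bit positions
 * follow the format table quoted above. */
int main(void)
{
    unsigned insn    = 0x00c6ba23u;
    unsigned opcode  = insn & 0x7Fu;         /* bits  6:0  -> 0100011 (store) */
    unsigned imm4_0  = (insn >> 7) & 0x1Fu;  /* bits 11:7  -> 10100           */
    unsigned funct3  = (insn >> 12) & 0x7u;  /* bits 14:12 -> 011, i.e. SD    */
    unsigned rs1     = (insn >> 15) & 0x1Fu; /* bits 19:15 -> 01101 (x13)     */
    unsigned rs2     = (insn >> 20) & 0x1Fu; /* bits 24:20 -> 01100 (x12)     */
    unsigned imm11_5 = insn >> 25;           /* bits 31:25 -> 0000000         */

    printf("opcode=%x funct3=%x rs1=x%u rs2=x%u imm=%u\n",
           opcode, funct3, rs1, rs2, (imm11_5 << 5) | imm4_0);
    return 0;
}

Decoded this way the instruction reads as sd x12, 20(x13), which matches the fields listed in the question.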

Related

How does case statement and assignment of values work in system-verilog/verilog?

I have a design module (a partially implemented seven-segment display) with a case statement as shown below. However, it looks as if, when no case item is provided for a bcd value, the previously assigned segment value is returned as the segment value for that uncovered bcd value.
Why is it behaving that way? Assume I don't want to use a default statement.
I printed out the values of bcd, segment and expectedOutput, and I observed exactly what I described above.
module seven_segment_display(output logic [6:0] segment, input logic [3:0] bcd);
  always @(*)
  begin
    case (bcd)
      4'b0011 : begin segment = 7'b1011011; end
      4'b1000 : begin segment = 7'b1111011; end
      4'b1010 : begin segment = 7'b0000000; end
      4'b0000 : begin segment = 7'b1111110; end
    endcase
  end
endmodule
bcd segment expectedOutput
0000 1111110 1111110
0001 1111110 0110000
0010 1111110 1101101
0011 1011011 1111001
0100 1011011 0110011
0101 1011011 1011011
0110 1011011 1011111
0111 1011011 1110000
1000 1111011 1111111
1001 1111011 1111011
1010 0000000 0000000
1011 0000000 0000000
1100 0000000 0000000
1101 0000000 0000000
1110 0000000 0000000
1111 0000000 0000000
segment is a variable. As in any other (software) language, variables remember their value until you overwrite it with some other value.
Your first input (bcd) is 4'b0000. There is a branch of the case statement that matches that value, so the value 7'b1111110 is assigned to the variable segment. Then you change the value of bcd to 4'b0001. There is no branch that matches that value, so no new value is assigned to segment, and it retains its old value.
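A rough software analogy (plain C rather than SystemVerilog, only meant to mirror the "variables remember their value" point): a variable assigned inside a switch keeps its previous value whenever no case matches.

#include <stdio.h>

int main(void)
{
    int segment = 0x7E;          /* value assigned while bcd was 0 (7'b1111110) */
    int bcd = 1;                 /* new input with no matching case below       */

    switch (bcd) {
    case 0x3: segment = 0x5B; break;
    case 0x8: segment = 0x7B; break;
    case 0xA: segment = 0x00; break;
    case 0x0: segment = 0x7E; break;
    /* no default: nothing is assigned when bcd == 1 */
    }

    printf("%02X\n", segment);   /* still 7E - the old value is retained */
    return 0;
}

In hardware terms that same "remembering" is what makes the synthesizer infer a latch for segment; a default assignment before the case (or a default item) is the usual way to avoid it.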

verilog subtraction does not yield carry out

I want to design an ALU that performs some operations on two 8-bit registers (A, B), and in order to detect carry_out I defined a 9-bit register temp and put the result of the operation on A and B in that register.
The MSB of that temp register is used as the carry out.
Here is part of my code:
module ALU(input signed [7:0] A, input [7:0] B, input carry_in, input [2:0] acode,
           output reg [7:0] R, output zero, output reg carry_out);
  reg [8:0] temp;
  reg [15:0] temp2;
  always @(A, B, acode) begin
    case (is_shift)   // is_shift is declared elsewhere in the full module
      1'b0: begin
        case (acode)
          3'b000: temp = A + B;
          3'b010: temp = A - B;
        endcase
        R = temp[7:0];
        carry_out = temp[8];
Given A = 11100101 and B = 11000111, here is the log:
//addition
A: 11100101 , B: 11000111
acode: 000
R: 10101100
zero: 0, carry_out: 1
//subtraction
A: 11100101 , B: 11000111
acode: 010
R: 00011110
zero: 0, carry_out: 0
In both cases the 9th bit of temp should be 1. It is right in the addition case, but in the subtraction case the subtraction result is right while the 9th bit of temp is not set to 1.
What is the problem here?
By the way: the effect of declaration of a register as signed is only in shifting and extending, yes? So this problem is not because of A being signed and B being unsigned, right?
The effect of declaration of a register as signed is only in shifting and extending
No, it affects all arithmetic, although if you mix in any unsigned operand or a part-select, the whole expression usually defaults back to unsigned arithmetic.
You cannot really have one input signed and one not; two's complement arithmetic will simply not work. You at least have to sign-extend the signed value and insert a 0 MSB onto the unsigned one, making sure it is evaluated as positive.
Your first example is:
1110 0101 // -27
1100 0111 // -57
1 1010 1100 // -84 (-27 -57)
Second example (subtraction)
1110 0101 // -27
0011 1001 // +57
1 0001 1110 // 30 (ignoring MSB) -226 Including MSB
But note that the output is 1 bit wider; RTL does not give you access to a carry, but rather an extra sum bit, therefore the inputs are sign-extended:
1 1110 0101 // -27
1 1100 0111 // -57
1 1010 1100 // -84
1 1110 0101 // -27
0 0011 1001 // +57
0 0001 1110 // 30
Note in the correctly sign extended subtraction the MSB is 0
But for your addition with the second value unsigned you need a 0 to show it is a positive number, and you will have bit growth of 1 bit:
1 1 1110 0101 // -27
0 0 1100 0111 // 199
0 0 1010 1100 // 172 (-27+199)
Here the extended bit (not a carry) is 0, not 1 as you predicted.
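If it helps to see the same arithmetic outside of Verilog, here is a small C sketch (mine, reusing the numbers above) in which the widening is done explicitly, so the extra bit really is an extra sum bit rather than a carry:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int8_t  A = -27;   /* 1110 0101 as 8-bit two's complement             */
    uint8_t B = 0xC7;  /* 1100 0111 = 199 unsigned, -57 if read as signed */

    /* Both operands sign-extended (the corrected examples above). */
    int16_t add_signed = (int16_t)A + (int16_t)(int8_t)B;  /* -27 + -57 = -84 */
    int16_t sub_signed = (int16_t)A - (int16_t)(int8_t)B;  /* -27 - -57 =  30 */

    /* A sign-extended, B zero-extended (treated as unsigned, +199). */
    int16_t add_mixed  = (int16_t)A + (int16_t)B;          /* -27 + 199 = 172 */

    printf("%d %d %d\n", add_signed, sub_signed, add_mixed);  /* -84 30 172 */
    return 0;
}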

Why is BCD = Decimal in PLC?

This question is derived from the comments on my previous SO question.
I am confused with PLC's interpretation of BCD and decimal.
In a PLC documentation, it somehow implies BCD = decimal:
The instruction reads the content of D300, 0100, as BCD. Referring to Cyber Slueth Omega's answer and an online BCD-Hex converter, 0100 (BCD) = 4 (Decimal) = 4 (Hex), but the documentation indicates 0100 (BCD) = 100 (Decimal).
Why?
BCD is HEX
BCD is not binary
HEX is not binary
BCD and HEX are representations of binary information.
The only difference is in how you decide to interpret the numbers. Some PLC instructions will take a piece of word memory and will tell you that "I, the TIM instruction, promise to treat the raw data in D300 as BCD data". It is still HEX data, but it interprets it differently.
If D300 = [x2486] --> the timer (as an example) will wait 248.6 seconds, even though HEX 2486 = 9350 decimal. You can treat hex data as anything: if you treat it as encoded BCD you get one answer, if you treat it as a plain unsigned binary number you get another, etc.
If D300 = [x1A3D] --> TIM will throw an error flag because D300 contains non-BCD hex digits
Further, the above example is showing HEX digits - not BINARY digits. It is confusing because they chose [x0100] as their example - only zeroes and ones. When you are plugging this into your online converter you are doing it wrong - you are converting binary 0100 to decimal 4. Hexadecimal is not binary - hex is a base16 representation of binary.
The anatomy of a D-memory location is this:
16 bits (binary):   | xxxx | xxxx | xxxx | xxxx |
4 bits per digit:   |  D4  |  D3  |  D2  |  D1  |   (hex)
example:        D300 = 1234  ->  | 0001 | 0010 | 0011 | 0100 |  (1, 2, 3, 4)
example:        D300 = 2F6B  ->  | 0010 | 1111 | 0110 | 1011 |  (2, F, 6, B)
example (OP):   D300 = 0100  ->  | 0000 | 0001 | 0000 | 0000 |  (0, 1, 0, 0)
A D-memory location can store values from x0000 -> xFFFF (decimal 0-65535). A D-memory location which is used to store BCD values, however, can only use decimal digits. A->F are not allowed. This reduces the range of a 16-bit memory location to 0000->9999.
Counting up you would go:
Decimal BCD HEX
1 0001 0001
2 0002 0002
3 0003 0003
4 0004 0004
5 0005 0005
6 0006 0006
7 0007 0007
8 0008 0008
9 0009 0009
10 0010 000A
11 0011 000B
12 0012 000C
13 0013 000D
14 0014 000E
15 0015 000F
16 0016 0010
17 0017 0011
18 0018 0012
19 0019 0013
20 0020 0014
...etc
Going the other way, if you wish to pass a decimal value to a memory location and have it stored as pure hex (not BCD hex!) you use the '&' symbol.
For example -> [MOV #123 D300]
This moves HEX value x0123 to memory location D300. If you use D300 in a future operation which interprets this as a hexadecimal number then it will have a decimal value of 291. If you use it in an instruction which interprets it as a BCD value then it will have a decimal value of 123.
If instead you do [MOV &123 D300]
This moves the decimal value 123 to D300 and stores it as a hexadecimal number -> [x007B]! If you use D300 now in a future operation which interprets this as a hexadecimal number it will have a decimal value of 123. If you try to use it in an instruction which interprets it as a BCD value you will get an ERROR because [x007B] contains the hex digit 'B' which is not a valid BCD digit.
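To make the two interpretations concrete, here is a small C sketch (the function name is mine, not from any PLC vendor's library) that reads the same 16-bit word either as a plain hex/binary number or as four BCD digits:

#include <stdint.h>
#include <stdio.h>

/* Read a 16-bit word as four BCD digits; returns -1 if any digit is A-F,
 * which is the case where a BCD instruction would raise an error. */
int bcd_to_decimal(uint16_t word)
{
    int value = 0, weight = 1;
    for (int i = 0; i < 4; i++) {
        int digit = (word >> (4 * i)) & 0xF;   /* one hex digit = one BCD digit */
        if (digit > 9)
            return -1;                         /* e.g. x1A3D is not valid BCD   */
        value += digit * weight;
        weight *= 10;
    }
    return value;
}

int main(void)
{
    printf("%d\n", bcd_to_decimal(0x0100));  /* 100 - the documentation's reading */
    printf("%d\n", 0x0100);                  /* 256 - the same bits read as hex   */
    printf("%d\n", bcd_to_decimal(0x1A3D));  /* -1  - contains non-BCD digits     */
    return 0;
}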
Binary-coded decimal is encoded as hex digits that have a limited range of 0-9. This means that 0x0100 should be read as 100 when BCD is meant. Numbers with hexadecimal digits from A to F are not valid BCD numbers.

Generating AC elements from jpeg file

I'm decoding a jpeg file. I have generated the huffman tables and quantization tables, and I have reached the point where I have to decode DC and AC elements. For example, let's say I have the following data:
FFDA 00 0C 03 01 00 02 11 03 11 00 3F 00 F2 A6 2A FD 54 C5 5F FFD9
If we ignore a few bytes from the SOS marker, my real data starts from the F2 byte. So let's write it in binary (starting from the F2 byte):
1111 0010 1010 0110 0010 1010 1111 1101 0101 0100 1100 0101 0101 1111
F 2 A 6 2 A F D 5 4 C 5 5 F
When decoding, the first element is the luminance DC element, so let's decode it.
[1111 0]010 1010 0110 0010 1010 1111 1101 0101 0100 1100 0101 0101 1111
F 2 A 6 2 A F D 5 4 C 5 5 F
So 11110 is the Huffman code (in my case) for element 08. This means that the next 8 bits are my DC value. When I take the next 8 bits, the value is:
1111 0[010 1010 0]110 0010 1010 1111 1101 0101 0100 1100 0101 0101 1111
F 2 A 6 2 A F D 5 4 C 5 5 F
The DC element value is -171.
Here is my problem: next is the luminance AC value, but I don't really understand the standard for the case when an AC coefficient is non-zero. Thanks!
The DC values, as you've seen, are defined as the number of "extra" bits which specify the positive or negative DC value. The AC coefficients are encoded differently because most of them are 0. The Huffman table defines each entry for AC coefficients with a "skip" value and an "extra bits" length. The skip value is how many zero AC coefficients to skip before storing the value, and the extra bits are treated the same way as DC values.
When decoding AC coefficients, you decode values from 1 to 63, but the way the encoding of the MCU ends can vary. You can have an actual value stored at index 63, or, if you're at an index > 48, you could get a ZRL (zero run length = 16 zeros), or any combination which takes you past the end.
A simplified decode loop:
void DecodeMCU(signed short *MCU)
{
    int index;
    unsigned short code, skip, extra;

    MCU[0] = decodeDC();
    index = 1;
    while (index < 64)
    {
        code = decodeAC();
        skip = code >> 4;   // skip value
        extra = code & 0xf; // extra bits
        index += skip;
        MCU[index++] = calcACValue(extra);
    }
}
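The decodeDC()/decodeAC()/calcACValue() helpers above are left abstract; the "extra bits" handling they rely on is the extend step from the JPEG standard (ITU-T T.81, F.2.2.1). A hedged C sketch of just that step (my own naming), which also reproduces the -171 DC value from the question:

#include <stdio.h>

/* Turn the raw extra bits plus their magnitude category into a signed value.
 * The same rule is used for the DC difference and for non-zero AC coefficients. */
static int jpeg_extend(int bits, int category)
{
    if (category == 0)
        return 0;                        /* no extra bits: the value is zero      */
    if (bits < (1 << (category - 1)))    /* leading extra bit 0 -> negative value */
        return bits - (1 << category) + 1;
    return bits;                         /* leading extra bit 1 -> positive value */
}

int main(void)
{
    /* The question's DC example: category 8, extra bits 0101 0100 (= 0x54). */
    printf("%d\n", jpeg_extend(0x54, 8));   /* prints -171 */
    return 0;
}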
The color components can be interleaved (typical) or stored in separate scans. The elements are encoded in zigzag order in each MCU (low-frequency elements first). The number of 8x8 blocks of coefficients which define an MCU varies depending on the color subsampling. For 1:1, there will be 1 Y followed by 1 Cr and 1 Cb. For typical digital camera images, the horizontal axis is subsampled, so you will get 2 Y blocks followed by 1 Cr and 1 Cb.
The quality setting of the compressed image determines the quantization table used and how many zero AC coefficients are encoded. The lower the quality, the more of each MCU will be zeros. When you do the inverse DCT on your MCU, the number of zeros will determine how much detail is preserved in your 8x8, 16x8, 8x16 or 16x16 block of pixels.
Here are the basic steps:
1) Entropy decode the 8x8 coefficient blocks, each color component is stored separately
2) De-zigzag and de-quantize the coefficients
3) Perform inverse DCT on the coefficients (might be 6 8x8 blocks for 4:2:0 subsampling)
4) Convert the colorspace from YCrCb to RGB or whatever you need

Byte codes for pixel maps for Ascii characters?

Anyone who ever had to draw text in a graphics application for pre-Windows operating systems (e.g. DOS) will know what I'm asking for.
Each ASCII character can be represented by an 8x8 pixel matrix. Each matrix can be represented by an 8-byte code (each byte used as a bit mask for one row of the matrix, with each 1 bit representing a white pixel and each 0 bit a black pixel).
Does anyone know where I can find the byte codes for the basic ASCII characters?
Thanks,
BW
Would this do?
Hope this helps.
There are some good ones here; maybe not 8x8, but still easy to parse.
A 5 x 7 typeface would cost less space than 8 x 8.
Do you need any characters that are missing from this?
Self-answering because user P i hasn't (they posted it in a comment on the question).
This GitHub repo is exactly what I was looking for:
dhepper/font8x8
From the README:
8x8 monochrome bitmap font for rendering
A collection of header files containing a 8x8 bitmap font.
font8x8.h contains all available characters
font8x8_basic.h contains unicode points U+0000 - U+007F
font8x8_latin.h contains unicode points U+0000 - U+00FF
Author: Daniel Hepper daniel@hepper.net
License: Public Domain
Encoding
Every character in the font is encoded row-wise in 8 bytes.
The least significant bit of each byte corresponds to the first pixel in a
row.
The character 'A' (0x41 / 65) is encoded as
{ 0x0C, 0x1E, 0x33, 0x33, 0x3F, 0x33, 0x33, 0x00}
0x0C => 0000 1100 => ..XX....
0x1E => 0001 1110 => .XXXX...
0x33 => 0011 0011 => XX..XX..
0x33 => 0011 0011 => XX..XX..
0x3F => 0011 1111 => XXXXXX..
0x33 => 0011 0011 => XX..XX..
0x33 => 0011 0011 => XX..XX..
0x00 => 0000 0000 => ........
To access the nth pixel in a row, right-shift by n.
                     . . X X . . . .
                     | | | | | | | |
(0x0C >> 0) & 1 == 0-+ | | | | | | |
(0x0C >> 1) & 1 == 0---+ | | | | | |
(0x0C >> 2) & 1 == 1-----+ | | | | |
(0x0C >> 3) & 1 == 1-------+ | | | |
(0x0C >> 4) & 1 == 0---------+ | | |
(0x0C >> 5) & 1 == 0-----------+ | |
(0x0C >> 6) & 1 == 0-------------+ |
(0x0C >> 7) & 1 == 0---------------+
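As a quick check of that layout, here is a small C sketch (mine; the bytes for 'A' are copied from the excerpt above) that renders one glyph stored row-wise with the least significant bit as the leftmost pixel:

#include <stdio.h>

static void print_glyph(const unsigned char glyph[8])
{
    for (int row = 0; row < 8; row++) {
        for (int col = 0; col < 8; col++)
            putchar((glyph[row] >> col) & 1 ? 'X' : '.');  /* bit n = pixel n */
        putchar('\n');
    }
}

int main(void)
{
    const unsigned char A[8] = { 0x0C, 0x1E, 0x33, 0x33, 0x3F, 0x33, 0x33, 0x00 };
    print_glyph(A);   /* reproduces the ..XX.... / .XXXX... pattern shown above */
    return 0;
}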
It depends on the font. Search Google for 8x8 pixel fonts and you'll find a lot of different ones.
Converting from an image to a byte code table is trivial: loop through the image one 8x8 block at a time, reading the pixels and setting the bytes, as in the sketch below.
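A hedged sketch of that loop (mine; it assumes the source image is already a simple one-byte-per-pixel monochrome buffer, which the answer does not specify):

#include <stdint.h>

/* Pack one 8x8 block of a monochrome image into the 8-byte, LSB-first-per-row
 * format discussed above. 'pixels' points at the top-left corner of the block,
 * 'stride' is the image width in pixels, non-zero bytes count as set pixels. */
void block_to_bytes(const uint8_t *pixels, int stride, uint8_t out[8])
{
    for (int row = 0; row < 8; row++) {
        uint8_t bits = 0;
        for (int col = 0; col < 8; col++)
            if (pixels[row * stride + col])
                bits |= (uint8_t)(1u << col);   /* leftmost pixel -> bit 0 */
        out[row] = bits;
    }
}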
http://cone3d.gamedev.net/cone3d/gfxsdl/tut4-2.gif
You could parse/process this bitmap and get the byte matrices from it.
