Anyone who has ever had to draw text in a graphics application on a pre-Windows operating system (e.g. DOS) will know what I'm asking for.
Each ASCII character can be represented by an 8x8 pixel matrix. Each matrix can be represented by an 8-byte code (each byte used as a bit mask for one row of the matrix, each 1 bit representing a white pixel and each 0 bit a black pixel).
Does anyone know where I can find the byte codes for the basic ASCII characters?
Thanks,
BW
Would this do?
Hope this helps.
There are some good ones here; maybe not 8x8, but still easy to parse.
A 5 x 7 typeface would take less space than 8 x 8.
Do you need any characters that are missing from this?
Self-answering because user P i hasn't (they posted it in a comment on the question).
This GitHub repo is exactly what I was looking for:
dhepper/font8x8
From the README:
8x8 monochrome bitmap font for rendering
A collection of header files containing an 8x8 bitmap font.
font8x8.h contains all available characters
font8x8_basic.h contains unicode points U+0000 - U+007F
font8x8_latin.h contains unicode points U+0000 - U+00FF
Author: Daniel Hepper daniel#hepper.net
License: Public Domain
Encoding
Every character in the font is encoded row-wise in 8 bytes.
The least significant bit of each byte corresponds to the first pixel in a
row.
The character 'A' (0x41 / 65) is encoded as
{ 0x0C, 0x1E, 0x33, 0x33, 0x3F, 0x33, 0x33, 0x00}
0x0C => 0000 1100 => ..XX....
0x1E => 0001 1110 => .XXXX...
0x33 => 0011 0011 => XX..XX..
0x33 => 0011 0011 => XX..XX..
0x3F => 0011 1111 => XXXXXX..
0x33 => 0011 0011 => XX..XX..
0x33 => 0011 0011 => XX..XX..
0x00 => 0000 0000 => ........
To access the nth pixel in a row, right-shift by n.
. . X X . . . .
| | | | | | | |
(0x0C >> 0) & 1 == 0-+ | | | | | | |
(0x0C >> 1) & 1 == 0---+ | | | | | |
(0x0C >> 2) & 1 == 1-----+ | | | | |
(0x0C >> 3) & 1 == 1-------+ | | | |
(0x0C >> 4) & 1 == 0---------+ | | |
(0x0C >> 5) & 1 == 0-----------+ | |
(0x0C >> 6) & 1 == 0-------------+ |
(0x0C >> 7) & 1 == 0---------------+
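To make the bit layout concrete, here is a minimal C sketch (not part of the repo) that renders the 'A' glyph above using exactly that right-shift rule:

#include <stdio.h>

int main(void)
{
    /* 'A' (0x41), copied from the font8x8 README above */
    unsigned char glyph[8] = { 0x0C, 0x1E, 0x33, 0x33, 0x3F, 0x33, 0x33, 0x00 };

    for (int row = 0; row < 8; row++) {
        for (int col = 0; col < 8; col++) {
            /* least significant bit = leftmost pixel, so shift right by the column */
            putchar((glyph[row] >> col) & 1 ? 'X' : '.');
        }
        putchar('\n');
    }
    return 0;
}

It prints the same ..XX.... pattern shown in the table above.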
It depends on the font. Search Google for 8x8 pixel fonts and you'll find a lot of different ones.
Converting from an image to a byte-code table is trivial: loop through the image one 8x8 block at a time, reading the pixels and setting the bytes.
http://cone3d.gamedev.net/cone3d/gfxsdl/tut4-2.gif
You could parse/process this bitmap and get the byte matrices from it.
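If you go that route, here is a minimal C sketch of the packing loop, assuming the bitmap has already been loaded into a row-major array of 0/1 pixel values. The function and parameter names are made up for illustration.

#include <stdint.h>

/* Illustrative sketch: pack one 8x8 block of a monochrome image into 8 bytes.
 * `pixels` is assumed to be a row-major array of 0/1 values, `stride` the image
 * width in pixels, and (x0, y0) the top-left corner of the block.
 * LSB = leftmost pixel here, matching the font8x8 encoding described above;
 * flip the shift direction if your renderer expects MSB-first rows. */
void pack_glyph(const uint8_t *pixels, int stride, int x0, int y0, uint8_t out[8])
{
    for (int row = 0; row < 8; row++) {
        uint8_t b = 0;
        for (int col = 0; col < 8; col++) {
            if (pixels[(y0 + row) * stride + (x0 + col)])
                b |= (uint8_t)(1u << col);   /* set bit `col` for a lit pixel */
        }
        out[row] = b;
    }
}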
I'm building a macro keyboard and one of the functions I'm trying to implement is Ctrl+Shift+R, but in the definitions only one modifier exists in the fixed 8-byte string. How do I implement additional modifiers?
USB keyboards almost always have a HID Report Descriptor that defines each inbound keyboard report as follows:
Bit: 7 6 5 4 3 2 1 0
+---+---+---+---+---+---+---+---+
Byte 0 | RG| RA| RS| RC| LG| LA| LS| LC| Modifier bits (LC=Left Control, LS=Left Shift, etc.)
+---+---+---+---+---+---+---+---+
Byte 1 | Reserved byte |
+---+---+---+---+---+---+---+---+
Byte 2 | Key 1 |
+---+---+---+---+---+---+---+---+
Byte 3 | Key 2 |
+---+---+---+---+---+---+---+---+
Byte 4 | Key 3 |
+---+---+---+---+---+---+---+---+
Byte 5 | Key 4 |
+---+---+---+---+---+---+---+---+
Byte 6 | Key 5 |
+---+---+---+---+---+---+---+---+
Byte 7 | Key 6 |
+---+---+---+---+---+---+---+---+
Each modifier key is represented as a single bit in byte 0. To indicate that multiple modifier keys are pressed you would "or" the values together. You could code something like:
#define MOD_LEFT_CONTROL 0b00000001  /* bit 0 of the modifier byte */
#define MOD_LEFT_SHIFT   0b00000010  /* bit 1 */
#define MOD_LEFT_ALT     0b00000100  /* bit 2 */
.
.
#define KEY_R 0x15                   /* HID usage ID for 'r' */
.
.
modifiers = MOD_LEFT_CONTROL | MOD_LEFT_SHIFT;  /* OR the modifier bits together */
reserved  = 0;
key[0]    = KEY_R;
It is possible to define a HID Report Descriptor that allows modifier key usages to be included in the 6-byte key array but there is usually no need to do that - and the above scheme uses less space anyway.
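For completeness, here is a hedged C sketch of filling in the full 8-byte report for Ctrl+Shift+R using the layout in the table above (how the report is actually sent depends on your firmware or HID library; the function name is made up):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MOD_LEFT_CONTROL 0x01
#define MOD_LEFT_SHIFT   0x02
#define KEY_R            0x15   /* HID usage ID for 'r' */

/* Build the 8-byte boot-keyboard input report described above:
 * byte 0 = modifier bits, byte 1 = reserved, bytes 2-7 = up to 6 key usages. */
static void build_ctrl_shift_r(uint8_t report[8])
{
    memset(report, 0, 8);
    report[0] = MOD_LEFT_CONTROL | MOD_LEFT_SHIFT;  /* both modifiers at once */
    report[2] = KEY_R;                              /* first (and only) key slot */
}

int main(void)
{
    uint8_t report[8];
    build_ctrl_shift_r(report);
    for (int i = 0; i < 8; i++)
        printf("%02x ", report[i]);                 /* 03 00 15 00 00 00 00 00 */
    printf("\n");
    return 0;
}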
I was working on a problem for converting base64 to hex and the problem prompt said as an example:
3q2+7w== should produce deadbeef
But if I do that manually, using the base64 digit set ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/ I get:
3 110111
q 101010
2 110110
+ 111110
7 111011
w 110000
As a binary string:
110111 101010 110110 111110 111011 110000
grouped into fours:
1101 1110 1010 1101 1011 1110 1110 1111 0000
to hex
d e a d b e e f 0
So shouldn't it be deadbeef0 and not deadbeef? Or am I missing something here?
Base64 is meant to encode bytes (8 bit).
Your base64 string has 6 characters plus 2 padding chars (=), so you could theoretically encode 6*6 bits = 36 bits, which would equal nine 4-bit hex digits. But in fact you must think in bytes, and then you only have 4 bytes (32 bits) of significant information. The remaining 4 bits (the extra '0') must be ignored.
You can calculate the number of insignificant bits as:
y : insignificant bits
x : number of base64 characters (without padding)
y = (x*6) mod 8
So in your case:
y = (6*6) mod 8 = 4
So you have 4 insignificant bits at the end that you need to ignore.
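Here is a small C sketch of that bookkeeping (purely illustrative, not a full base64 decoder): it decodes the six non-padding characters by hand and shows that only four whole bytes, deadbeef, survive.

#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *alphabet =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    const char *input = "3q2+7w";            /* the 6 non-padding characters */

    unsigned long long bits = 0;
    int nbits = 0;
    for (const char *p = input; *p; p++) {
        bits = (bits << 6) | (unsigned)(strchr(alphabet, *p) - alphabet);
        nbits += 6;                          /* 6 bits per base64 character */
    }

    int nbytes = nbits / 8;                  /* 36 / 8 = 4 full bytes */
    for (int i = 0; i < nbytes; i++) {
        unsigned byte = (unsigned)((bits >> (nbits - 8 * (i + 1))) & 0xFF);
        printf("%02x", byte);                /* prints: deadbeef */
    }
    printf("  (%d insignificant bits dropped)\n", nbits - 8 * nbytes);
    return 0;
}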
I am a bit lost on understanding the implementation of a specific command.
In this example, the instruction 0x00c6ba23 is given, which is 0000 0000 1100 0110 1011 1010 0010 0011 in binary.
I am attempting to find the ALU control unit’s inputs for this instruction.
From this I can see
opcode = 0100011
imm[4:0] = 10100
funct3 = 011 (incorrect...)
rs1 = 01101
rs2 = 01100
imm[11:5] = 0000000
I am using this image to decode it
My question is how do I get the ALU control bits and ALUOp control bits for this function? And why is the function SD, even though the funct 3 is showing 011 instead of 111?
... why is the function SD, even though the funct 3 is showing 011 instead of 111?
011 is correct. The funct3 bits must be 011 in order for this to be an SD instruction. According to page 105 of https://content.riscv.org/wp-content/uploads/2017/05/riscv-spec-v2.2.pdf the SD instruction has the format:
| imm[11:5] | rs2 | rs1 | 011 | imm[4:0] | 0100011 |
If the funct3 bits were 111 then this instruction would not be SD.
... how do I get the ALU control bits and ALUOp control bits for this function?
Since this is an SD instruction, you can read those bits straight out of the SD line of the lower table in the diagram that you referenced in your question.
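If it helps to verify the decode, here is a small illustrative C sketch that extracts the S-type fields of 0x00c6ba23 at the bit positions given in the spec; it reproduces the values listed in the question, including funct3 = 011:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t insn = 0x00c6ba23;               /* the instruction from the question */

    unsigned opcode  =  insn        & 0x7Fu;  /* bits  6:0  -> 0100011 (store) */
    unsigned imm4_0  = (insn >> 7)  & 0x1Fu;  /* bits 11:7  -> 10100 */
    unsigned funct3  = (insn >> 12) & 0x07u;  /* bits 14:12 -> 011, i.e. SD */
    unsigned rs1     = (insn >> 15) & 0x1Fu;  /* bits 19:15 -> 01101 (x13) */
    unsigned rs2     = (insn >> 20) & 0x1Fu;  /* bits 24:20 -> 01100 (x12) */
    unsigned imm11_5 = (insn >> 25) & 0x7Fu;  /* bits 31:25 -> 0000000 */

    printf("opcode=%02x funct3=%u rs1=%u rs2=%u imm=%u\n",
           opcode, funct3, rs1, rs2, (imm11_5 << 5) | imm4_0);
    return 0;
}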
Question
When faced with signed hexadecimal numbers of unknown length, how can one use Excel formulas to easily convert those hexadecimal numbers to decimal numbers?
Example
Hex
---
00
FF
FE
FD
0A
0B
Use this deeply nested formula:
=HEX2DEC(N)-IF(ISERR(FIND(LEFT(IF(ISEVEN(LEN(N)),N,CONCAT(0,N))),"01234567")),16^LEN(IF(ISEVEN(LEN(N)),N,CONCAT(0,N))),0)
where N is a cell containing hexadecimal data.
This formula becomes more readable when expanded:
=HEX2DEC(N) -
/* check if sign bit is present in leftmost nibble, padding to an even number of digits if necessary */
IF( ISERR( FIND( LEFT( IF( ISEVEN(LEN(N))
, N
, CONCAT(0,N)
)
)
, "01234567"
)
)
/* offset if sign bit is present */
, 16^LEN( IF( ISEVEN(LEN(N))
, N
, CONCAT(0,N)
)
)
/* do not offset if sign bit is absent */
, 0
)
and may be read as "First, convert the hexadecimal value to an unsigned decimal value. Then offset the unsigned decimal value if the leftmost nibble of the data contains a sign bit; else do not offset."
Example Conversion
Hex | Dec
-----|----
00 | 0
FF | -1
FE | -2
FD | -3
0A | 10
0B | 11
Let the A1 cell contain a 1-byte hexadecimal string in either case.
To get the 2's complement decimal value of this string, use the following:
=HEX2DEC(A1)-IF(HEX2DEC(A1) > 127, 256, 0)
For an arbitrary length of bytes, use the following:
=HEX2DEC(A1) - IF(HEX2DEC(A1) > POWER(2, 4*LEN(A1))/2 - 1, POWER(2, 4*LEN(A1)), 0)
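For comparison, the same arithmetic expressed as a small C sketch (the function name is made up; like the formula above it derives the width from the string length, and it assumes at most 15 hex digits so the shift does not overflow):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Interpret a hex string as a signed value whose width is 4 bits per character,
 * mirroring the Excel formula above. */
long long hex_to_signed(const char *hex)
{
    long long value = strtoll(hex, NULL, 16);               /* unsigned reading */
    long long limit = 1LL << (4 * (long long)strlen(hex));  /* = 16^LEN */
    if (value > limit / 2 - 1)                               /* sign bit set? */
        value -= limit;                                      /* offset into negatives */
    return value;
}

int main(void)
{
    printf("%lld %lld %lld\n",
           hex_to_signed("FF"), hex_to_signed("FE"), hex_to_signed("0A"));  /* -1 -2 10 */
    return 0;
}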
I usually use the MOD function, but it needs addition and subtraction of half the maximum value. For an 8-bit hex number:
=MOD(HEX2DEC(A1) + 2^7, 2^8) - 2^7
Of course it can be made a generic formula based on length:
=MOD(HEX2DEC(A1) + 2^(4*LEN(A1)-1), 2^(4*LEN(A1))) - 2^(4*LEN(A1)-1)
But sometimes the input value has lost its leading zeroes, or maybe you are using hex values of arbitrary bit length (I often have to decode microcontroller registers where a 16-bit register may hold 3 signed values). I prefer keeping the bit length in a separate column:
=MOD(HEX2DEC(A1) + 2^(B1-1), 2^(B1)) - 2^(B1-1)
Example conversion
HEX | bit # | Dec
-----|-------|------
0 | 8 | 0
FF | 8 | -1
FF | 16 | 255
FFFE | 16 | -2
2FF | 10 | -257
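The MOD approach translates directly to other languages as well; here is an illustrative C sketch parameterised on the bit length, mirroring the last formula and the example table:

#include <stdio.h>

/* Wrap via modulo, then shift down by half the range,
 * mirroring =MOD(HEX2DEC(A1)+2^(B1-1),2^B1)-2^(B1-1). */
long long hex_to_signed_width(unsigned long long raw, int bits)
{
    unsigned long long range = 1ULL << bits;
    return (long long)((raw + range / 2) % range) - (long long)(range / 2);
}

int main(void)
{
    printf("%lld\n", hex_to_signed_width(0xFF,  8));   /* -1   */
    printf("%lld\n", hex_to_signed_width(0xFF,  16));  /* 255  */
    printf("%lld\n", hex_to_signed_width(0x2FF, 10));  /* -257 */
    return 0;
}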
This question is derived from the comments on my previous SO question.
I am confused with PLC's interpretation of BCD and decimal.
In the PLC documentation, it is somehow implied that BCD = decimal:
The instruction reads the content of D300, 0100, as BCD. Referring to Cyber Slueth Omega's answer and an online BCD-Hex converter, 0100 (BCD) = 4 (Decimal) = 4 (Hex), but the documentation indicates 0100 (BCD) = 100 (Decimal).
Why?
BCD is HEX
BCD is not binary
HEX is not binary
BCD and HEX are representations of binary information.
The only difference is in how you decide to interpret the numbers. Some PLC instructions will take a piece of word memory and will tell you that "I, the TIM instruction, promise to treat the raw data in D300 as BCD data". It is still HEX data, but it interprets it differently.
If D300 = [x2486] --> the timer (as example) will wait 248.6 seconds. This even though HEX 2486 = 9350 decimal. You can treat hex data as anything. If you treat hex data as encoded BCD you get one answer. If you treat it as a plain unsigned binary number you get another, etc.
If D300 = [x1A3D] --> TIM will throw an error flag because D300 contains non-BCD hex digits
Further, the above example is showing HEX digits - not BINARY digits. It is confusing because they chose [x0100] as their example - only zeroes and ones. When you are plugging this into your online converter you are doing it wrong - you are converting binary 0100 to decimal 4. Hexadecimal is not binary - hex is a base16 representation of binary.
The anatomy of a D-memory location is this:
16 Bits | xxxx | xxxx | xxxx | xxxx | /BINARY/
---> | | | |
4 bits/digit D4 D3 D2 D1 /HEX/
example
D300 = 1234 | 0001 | 0010 | 0011 | 0100 |
----> 1 2 3 4
example
D300 = 2F6B | 0010 | 1111 | 0110 | 1011 |
----> 2 F 6 B
example (OP!)
D300 = 0100 | 0000 | 0001 | 0000 | 0000 |
----> 0 1 0 0
A D-memory location can store values from x0000 -> xFFFF (decimal 0-65535). A D-memory location which is used to store BCD values, however, can only use decimal digits. A->F are not allowed. This reduces the range of a 16-bit memory location to 0000->9999.
Counting up you would go:
Decimal BCD HEX
1 0001 0001
2 0002 0002
3 0003 0003
4 0004 0004
5 0005 0005
6 0006 0006
7 0007 0007
8 0008 0008
9 0009 0009
10 0010 000A
11 0011 000B
12 0012 000C
13 0013 000D
14 0014 000E
15 0015 000F
16 0016 0010
17 0017 0011
18 0018 0012
19 0019 0013
20 0020 0014
...etc
Going the other way, if you wish to pass a decimal value to a memory location and have it stored as pure hex (not BCD hex!) you use the '&' symbol.
For example -> [MOV #123 D300]
This moves HEX value x0123 to memory location D300. If you use D300 in a future operation which interprets this as a hexadecimal number then it will have a decimal value of 291. If you use it in an instruction which interprets it as a BCD value then it will have a decimal value of 123.
If instead you do [MOV &123 D300]
This moves the decimal value 123 to D300 and stores it as a hexadecimal number -> [x007B]! If you use D300 now in a future operation which interprets this as a hexadecimal number it will have a decimal value of 123. If you try to use it in an instruction which interprets it as a BCD value you will get an ERROR because [x007B] contains the hex digit 'B' which is not a valid BCD digit.
Binary-coded decimal is encoded as hex digits that have a limited range of 0-9. This means that 0x0100 should be read as 100 when BCD is meant. Numbers with hexadecimal digits from A to F are not valid BCD numbers.
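To make the difference in interpretation concrete, here is a small C sketch (not PLC code) that reads a 16-bit word as 4-digit BCD and rejects anything containing the hex digits A-F, just as the instructions described above would:

#include <stdio.h>

/* Interpret a 16-bit word as 4-digit BCD (as the TIM instruction would).
 * Returns -1 if any nibble is A-F, i.e. not a valid BCD digit. */
int bcd_to_decimal(unsigned word)
{
    int value = 0;
    for (int shift = 12; shift >= 0; shift -= 4) {
        unsigned digit = (word >> shift) & 0xF;
        if (digit > 9)
            return -1;             /* non-BCD digit -> the PLC raises an error */
        value = value * 10 + (int)digit;
    }
    return value;
}

int main(void)
{
    printf("%d\n", bcd_to_decimal(0x0100));  /* 100  */
    printf("%d\n", bcd_to_decimal(0x2486));  /* 2486 */
    printf("%d\n", bcd_to_decimal(0x1A3D));  /* -1: contains 'A', not valid BCD */
    return 0;
}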