Verilog: Converting BCD (or binary) to BCH

I'm looking to code a BCD (or binary) to binary-coded hexadecimal (BCH) converter, whose output will then be converted to 7-segment display codes and sent serially to a latched shift register to drive the display. It's for a 16-bit microprocessor that outputs a signed 16-bit number.
I've already successfully coded and fully tested a binary to BCD converter using the shift-and-add-3 algorithm. The number is converted to positive if it is negative, and a sign flag is set to record the sign. Most design examples I saw on the internet were combinational. I took a sequential approach instead, and it takes around 35 clock cycles to do the conversion.
My question is, is there a way to convert the BCD I have to BCH? Or perhaps it would be easier to convert the binary to BCH. Whichever way is more feasible. Performance is not an issue. Is there an existing algorithm to do so?
I appreciate your responses.

You should just use a look-up table. Have the input to your case statement be your BCD digit and the output be your BCH digit. Both are guaranteed to be 4 bits wide, so you can process your BCD digits one at a time and each one will produce a 4-bit output.
Converting from binary to BCD is harder because you need the double-dabble algorithm (as you have found out). But once the value is in BCD you shouldn't have a problem going to BCH.
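If it helps, here is a minimal C model of the look-up-table idea, applied to the 7-segment codes you mention (the patterns assume active-high segments ordered gfedcba, which is only a convention; on an active-low display you would invert them, and in Verilog this becomes a case statement):
#include <stdint.h>

/* Illustrative table: a 4-bit digit indexes directly into an array of
 * 7-segment patterns (bit 0 = segment a ... bit 6 = segment g, active high). */
static const uint8_t seg7[16] = {
    0x3F, 0x06, 0x5B, 0x4F, 0x66, 0x6D, 0x7D, 0x07,   /* 0-7      */
    0x7F, 0x6F, 0x77, 0x7C, 0x39, 0x5E, 0x79, 0x71    /* 8-9, A-F */
};

uint8_t digit_to_seg7(uint8_t digit)
{
    return seg7[digit & 0x0F];   /* mask to 4 bits, then one table lookup */
}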

Encoding binary strings into arbitrary alphabets

If you have a set of binary strings limited to some normally small size, such as 256 or up to 512 bytes like some of the hashing algorithms produce, and you want to encode those 1's and 0's into, say, hex (a 16-character alphabet), then you take the whole string into memory at once and convert it into hex. At least that's what I think happens.
I don't have this question fully formulated, but what I'm wondering is whether you can convert an arbitrarily long binary string into some alphabet without needing to read the whole string into memory. The reason this isn't a fully formed question is that I'm not exactly sure whether you typically do read the whole string into memory to create the encoded version.
So if you have something like this:
1011101010011011011101010011011011101010011011110011011110110110111101001100101010010100100000010111101110101001101101110101001101101110101001101111001101111011011011110100110010101001010010000001011110111010100110110111010100110110111010100110111100111011101010011011011101010011011011101010100101010010100100000010111101110101001101101110101001101101111010011011110011011110110110111101001100101010010100100000010111101110101001101101101101101101101111010100110110111010100110110111010100110111100110111101101101111010011001010100101001000000101111011101010011011011101010011011011101010011011110011011110110110111101001100 ... 10^50 longer
Something like the whole genetic code, or a million billion times that, would be too large to read into memory, and too slow to encode into hex if you had to stream the whole thing through memory before you could figure out the final encoding.
So I'm wondering three things:
Whether you have to read something fully in order to encode it into some other alphabet.
If you do, why that is the case.
If you don't, how it works.
The reason I'm asking is because looking at a string like 1010101, if I were to encode it as hex there are a few ways:
One character at a time, so it would essentially stay 1010101, unless the alphabet was {a, b}, in which case it would be abababa. This is the best case because you never have to read more than 1 character into memory to figure out the encoding. But it limits you to a 2-character alphabet. (With anything more than a 2-character alphabet I start getting confused.)
By turning it into an integer, then converting that into a hex value. But this would require reading the whole value to compute the final (big)integer size. So that's where I get confused.
I feel like the third way (3) would be to read partial chunks of the input bits somehow, like 1010 then 010, but that would not work if the encoding were integers, because 1010 010 would be A 2 in hex, yet 2 decodes back to 10, not 010. So it's like you would need to break the input so that each chunk starts with a 1. But then what if you wanted each chunk to be no longer than 10 hex characters and you had a long run of 1000 0's? Then you need some other trick, perhaps having the encoded hex value tell you how many preceding zeroes there are, etc. So it seems like it gets complicated, and I'm wondering if there are already established systems that have figured out how to do this. Hence the above questions.
For an example, say I wanted to encode the above binary string into an 8-bit alphabet, so like ASCII. Then I might have aBc?D4*&((!.... But then to deserialize this into the bits is one part, and to serialize the bits into this is another (these characters aren't the actual characters mapped to the above bit example).
Yes, you're way over-complicating it. To start simple, consider bit strings whose length is by definition a multiple of 4. They can be represented in hexadecimal by just grouping the bits into fours and remapping each group to a hexadecimal digit:
raw: 11011110101011011011111011101111
group: 1101 1110 1010 1101 1011 1110 1110 1111
remap: D E A D B E E F
So 11011110101011011011111011101111 -> DEADBEEF. The fact that every nibble happens to have its top bit set is just an artifact of the example chosen. By definition the input is divided up into groups of four, and every hexadecimal digit is later decoded back to a group of four bits, including leading zeroes if applicable. This is all you need for typical hash codes, whose length is a multiple of 4 bits.
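In code, the fixed-length case is just a loop over nibbles. A minimal C sketch (the function name is made up for illustration):
#include <stdio.h>
#include <string.h>

/* Encode a bit string, given as ASCII '0'/'1' characters with a length that
 * is a multiple of 4, by packing 4 bits at a time into one hex digit. */
void bits_to_hex(const char *bits, char *hex_out)
{
    size_t n = strlen(bits);
    size_t j = 0;
    for (size_t i = 0; i + 4 <= n; i += 4) {
        int nibble = 0;
        for (int k = 0; k < 4; k++)
            nibble = (nibble << 1) | (bits[i + k] - '0');
        hex_out[j++] = "0123456789ABCDEF"[nibble];
    }
    hex_out[j] = '\0';
}

int main(void)
{
    char hex[16];
    bits_to_hex("11011110101011011011111011101111", hex);
    printf("%s\n", hex);   /* prints DEADBEEF */
    return 0;
}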
The problems start when we want to encode bit strings whose length is variable and not necessarily a multiple of 4: then there has to be some padding somewhere, and the decoder needs to know how much padding there was (and where, but the location is a convention that you choose). This is why your example seemed so ambiguous: it is. Extra information needs to be added to tell the decoder how many bits to discard.
For example, leaving aside the mechanism that transmits the number of padding bits, we could encode 1010101 as 55 or A5 or AA (and more!) depending on where we choose to put the padding; whichever convention we choose, the decoder needs to know that there is 1 bit of padding. To put that back in terms of bits, 1010101 could be encoded as any of these:
x101 0101
101x 0101
1010 x101
1010 101x
Where x marks the bit which is inserted by the encoder and discarded by the decoder. The value of that bit doesn't actually matter because it is discarded, so D5 is also a fine encoding, and so on.
All of these choices of where to put the padding still allow the bit string to be encoded incrementally, without storing the whole bit string in memory, though putting the padding in the first hexadecimal digit requires knowing the length of the bit string (at least modulo 4) up front.
If you are asking this in the context of Huffman coding, you wouldn't want to calculate the length of the bit string in advance, so the padding has to go at the end. Often an extra symbol is added to the alphabet that signals the end of the stream, which usually makes it unnecessary to explicitly store how many padding bits there are (there might be any number of them, but since they appear after the STOP symbol, the decoder automatically disregards them).
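To make the incremental case concrete, here is a small C sketch of a streaming encoder that pads at the end and reports how many padding bits it added (the names and structure are illustrative, not a standard API):
#include <stdio.h>

/* Bits are pushed one at a time and a hex digit is emitted as soon as four
 * bits have accumulated, so the whole bit string never has to be in memory. */
typedef struct {
    unsigned acc;    /* bits collected so far, most significant first */
    int      nbits;  /* how many bits are currently in acc (0..3)     */
} HexEncoder;

void hex_push_bit(HexEncoder *e, int bit)
{
    e->acc = (e->acc << 1) | (bit & 1);
    if (++e->nbits == 4) {
        putchar("0123456789ABCDEF"[e->acc]);
        e->acc = 0;
        e->nbits = 0;
    }
}

/* Flush at end of stream; returns how many padding bits were appended.
 * The decoder has to learn this number somehow (or see a STOP symbol). */
int hex_finish(HexEncoder *e)
{
    int pad = (e->nbits == 0) ? 0 : 4 - e->nbits;
    if (pad) {
        e->acc <<= pad;                      /* zero padding on the right */
        putchar("0123456789ABCDEF"[e->acc]);
        e->acc = 0;
        e->nbits = 0;
    }
    return pad;
}

int main(void)
{
    HexEncoder e = {0, 0};
    for (const char *p = "1010101"; *p; p++)
        hex_push_bit(&e, *p - '0');
    printf(" (%d padding bit(s))\n", hex_finish(&e));
    return 0;
}
Fed the 1010101 example, this prints AA with 1 padding bit, matching the 1010 101x convention above.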

hexadecimal seven segment display verilog

I've taken on a project using Verilog. We have two 4-bit numbers, a multiplexer (S0, S1) and four modules (adder, subtractor, AND, XOR). The output is 4 bits; I think it amounts to a simple ALU. I have written Verilog code that contains all of them as modules, and I have assigned pins on the DE0 board. As you can see, the output can be seen on the LEDs, and there is no problem with that. But how can the output be displayed on the seven-segment display instead of the LEDs? The result should be hexadecimal instead of binary. I also have pin assignments for the seven-segment display, so I think I will hook them up like the LEDs. I'm new to Verilog; this will be my first program.
If S is 0 (00), adder result will be seen on LEDs.
If S is 1 (01), subtraction result will be seen on LEDs.
If S is 2 (10), AND operation result will be seen on LEDs.
If S is 3 (11), XOR operation result will be seen on LEDs.
A seven-segment display is really just 7 LEDs in a figure-8 pattern (with an 8th LED for the decimal point). As such, you only need to drive the pin attached to a segment LOW to light that segment up (see the DE0 handbook, Section 4.3, for the pins attached to the seven-segment display: ftp://ftp.altera.com/up/pub/Altera_Material/Boards/DE0/DE0_User_Manual.pdf ).
Now, this means you have control over each segment, but just driving your 4-bit number into the display won't produce something you can easily read. For this, you're going to need a converter from your 4-bit value into a 7-bit value representing the pattern to light up (i.e., if your output is 4'b0001, you want your seven-segment display to show a 1, so you need to convert 4'b0001 into a 7-bit value that will result in a 1 being displayed).
I think this is a worthy design challenge for you to take on as you learn Verilog, so I will not provide code for a 4-bit to seven-segment display module. But if you run into any issues creating your own, feel free to comment on this answer and we can try to help, or make a new question if it's a big issue.

Fortran 77: output floats with variable widths

I need to output lots of (>20 million) float values to a text file from a Fortran 77 program. I'd like to keep the output file as small as possible. Therefore I would like to output the floats in a compact way, without resorting to binary.
I know the precision I need (usually two digits right of the decimal point), so in C I would use printf("%.2f %.2f", val1, val2). Is something like this possible in Fortran 77? All I found was that I have to set the field width explicitly (as in format (f8.2,x,f8.2)). This wastes lots of space when I don't know the range of the output numbers beforehand.
If it is not possible in Fortran 77, do newer Fortran standards offer a way to do this?
The Fortran 2008 standard allows an edit descriptor such as f0.2, which produces output in the smallest possible field width that holds the whole part of the number followed by a decimal point and two fractional digits. I think this has been part of the language standard since Fortran 90, possibly longer.
If you have a number X, then INT(LOG10(X)) + 1 is the number of digits in its integer part. So you just have to construct a custom FORMAT for each of the values you want to print.
It is not very elegant, but I think it will help you achieve what you want.
I know this might come across as pedantic and unhelpful, but hear me out. It sounds like you are doing bad science. If your instrument is spitting out numbers from 1000.00 down to 0.01, then it is probably only accurate to about one part in a hundred. So a number like 9894.36 ought to be rounded to 9900 (no decimal point); the other digits are not significant. Why is that relevant and helpful? Because you are wasting storage space if you store 9894.36. So the answer is to use the g edit descriptor, which switches to scientific notation when the magnitude calls for it. Then all of your numbers will take up the same space.

scale 14 bit word to an 8 bit word

I'm working on a project where I sample a signal with an ADC that represents values as 14-bit words. I need to scale the values to 8-bit words. What's a good way to go about this in general? By the way, I'm using an FPGA, so I'd like to do it in "hardware" rather than in software. Also, in case you're wondering, the chain of events will be: sample the analog signal, represent the sample value as a 14-bit word, scale the 14-bit word to an 8-bit word, transmit the 8-bit word over UART to the PC's COM1 port.
I've never done this before. I was assuming you use quantization levels, but I'm not sure what an efficient circuit for this operation would be. Any help would be appreciated.
Thanks
You just need an add and a shift:
val_8 = (val_14 + 32) >> 6;
(The + 32 is necessary to get correct rounding - you can omit it but you will get more truncation noise in your signal if you do.)
I think you just drop the six lowest resolution bits and call it good, right? But I might not fully understand the problem statement.
Paul's algorithm is correct, but you'll need some bounds checking.
assign val_8 = (&val_14[13:5]) ?          // make sure your sum won't overflow
               8'hFF :                    // assign all 1's if it will
               val_14[13:6] + val_14[5];
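For reference, here is a small C model of the same round-and-saturate behaviour; something like this is handy as a golden model when writing a testbench (the function name is made up):
#include <stdint.h>

/* Round the 14-bit sample to 8 bits and saturate instead of wrapping
 * when the rounding carries out of the top bit. */
uint8_t scale_14_to_8(uint16_t val_14)
{
    val_14 &= 0x3FFF;                        /* keep only 14 bits           */
    uint16_t rounded = (val_14 + 32) >> 6;   /* add-and-shift with rounding */
    return (rounded > 0xFF) ? 0xFF : (uint8_t)rounded;
}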

What precisely is the "silence" value in the SDL audio API?

In SDL, when you set up your audio output device, you and SDL have to agree on an audio format - e.g. 44.1KHz stereo 16-bit signed little-endian. That's fine. But along with the final agreed format, you also get a computed "silence" value which doesn't seem well documented.
A silent sound sample obviously consists of the same sample value repeated over and over again, and you want that to be at the "zero" level. In a sense any constant value will do, but you have to agree on a value (so you don't get pops when switching to a different sound), and in a sane world you want to choose a value bang in the centre of your sample-value range.
So if you happen to use an unsigned format with a sample-value range of 0..whatever, your silence value will be whatever/2.
EDIT - inserted "unsigned" below to avoid confusion.
That's all fine. But the silence value you are given is an unsigned 8-bit integer. That doesn't work very well if you want unsigned 16-bit samples: the logical silence value of 0x8000 requires two different byte values, and it requires them to be in the correct endian order.
So the silence value you get from SDL doesn't seem to make much sense. You can't use it to wipe your buffers, for instance, without dealing with extra complications and making inferences which pretty much make the precalculated silence value pointless anyway.
Which means, of course, that I've misunderstood the point.
So - if this isn't how the silence value is meant to be used, how should it be used?
I have no evidence to back this up but I think the assumption here is that "silence" could be interpreted as "silence for common soundcard formats". Those being:
Unsigned 8-bit integers
Signed 16-bit integers
Signed 32-bit integers (for 24-bit audio data)
Normalized 32-bit floating point
Normalized 64-bit floating point.
In all of these cases except unsigned 8-bit, zero (0) is the "zero amplitude" value. So a single unsigned 8-bit integer is enough to describe the silence byte for any of these formats: 0x80 (the midpoint) for unsigned 8-bit data, and 0 for everything else, which is also why you can fill a buffer with it byte by byte.
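A minimal sketch of how that value is typically used, assuming SDL 2 and an audio spec already filled in by SDL_OpenAudioDevice (the callback name and the obtained variable are illustrative):
#include <SDL.h>

static SDL_AudioSpec obtained;   /* assumed filled in by SDL_OpenAudioDevice */

/* Audio callback: when there is nothing to play, fill the buffer with the
 * precomputed silence byte. Because silence is 0 for every common format
 * except unsigned 8-bit (where it is 0x80), a plain byte-wise fill produces
 * a silent buffer in all of these cases. */
static void audio_callback(void *userdata, Uint8 *stream, int len)
{
    (void)userdata;
    SDL_memset(stream, obtained.silence, (size_t)len);
    /* ...mix real audio into stream here when there is something to play... */
}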
