Additional bit per sample for 2nd channel in FLAC__CHANNEL_ASSIGNMENT_MID_SIDE - audio

I'm trying to understand the following section (link) of libFLAC
case FLAC__CHANNEL_ASSIGNMENT_MID_SIDE:
FLAC__ASSERT(decoder->private_->frame.header.channels == 2);
if(channel == 1)
bps++;
Why are the warm-up samples for the 2nd channel stored with an additional bit per sample compared to the 1st channel? Also, I can't find any mention of this in the format spec - am I missing something?

I also asked on hydrogenaudio, and got an answer there: https://hydrogenaud.io/index.php?topic=121900.0
From the spec (not the one on xiph.org):
The side channel needs one extra bit of bit depth as the subtraction can produce sample values twice as large as the maximum possible in any given bit depth. The mid channel in mid-side stereo does not need one extra bit, as it is shifted left one bit. The left shift of the mid channel does not lead to non-lossless behavior, because an uneven sample in the mid subframe must always be accompanied by a corresponding uneven sample in the side subframe, which means the lost least significant bit can be restored by taking it from the sample in the side subframe.
And from user ktf:
Simply put: if you have 8-bit audio, and the left sample is +127 and the right sample -128, the diff channel becomes 127 - -128 = 255. So, you need 9 bits to code for that number.
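The arithmetic can be sketched in Python (this mirrors what libFLAC computes, but the function names here are illustrative, not libFLAC's API):

```python
def ms_encode(left, right):
    # side = difference; its range is twice the input range, so it
    # needs one extra bit (e.g. 127 - (-128) = 255 needs 9 bits)
    side = left - right
    # mid = average, floored; the dropped LSB is recoverable because
    # left + right and left - right always have the same parity
    mid = (left + right) >> 1
    return mid, side

def ms_decode(mid, side):
    mid = (mid << 1) | (side & 1)  # restore mid's lost LSB from side's LSB
    left = (mid + side) >> 1
    right = (mid - side) >> 1
    return left, right
```

Round-tripping the extreme 8-bit pair from ktf's example, `ms_encode(127, -128)` gives `side = 255`, which is exactly why the side subframe is coded at bps + 1.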

Related

DEFLATE: dynamic block structure

I am trying to construct an explicit example of a dynamic block. Please let me know if this is wrong.
Considering this example of lit/len alphabet:
A(0), B(0), C(1), D(0), E(2), F(2), G(2), H(2)
and the rest of symbols having zero code lengths.
The sequence (SQ) of code lengths would be 0, ..., 0, 0, 1, 0, 2, 2, 2, 2, 0, ..., 0.
Then we have to compress it further with run-length encoding. So we have to calculate number of repetitions and either use flag 16 to copy the previous code length, or 17 or 18 to repeat code length 0 (using extra bits).
My problem is this. After sending the header information and the code lengths for the code-length alphabet in the prescribed order (16, 17, 18, ...), the next piece of information would be something like:
18, some extra-bits value, 1, 0, 2, 16, extra-bits value 0, 18, some extra-bits value. (There would probably be another 18 flag, since the maximum repeat count is 138.)
Then we have the same thing for the distance alphabet, and finally the input data encoded with the canonical Huffman codes, plus extra bits where necessary.
Is it necessary to send the code lengths of 0? If so, why?
If yes, why is it necessary to have HLIT and HDIST and not only HCLEN, given that the sequence lengths are fixed at 286 for lit/len and 30 for distances?
If not what would be the real solution?
Another problem:
In this case we have code length 2 repeated, with an extra-bits value of 0 (meaning 3 repetitions).
Is this last number also included in the code length tree construction?
If yes, I can't understand how: flag 18 can be followed by an extra-bits value of up to 127 (1111111), representing 138 repetitions, and that value couldn't fit among the alphabet symbols 0-18.
P.S. When I say extra bits here, I mean the bits used to signal how many repetitions of a length are used. More precisely, for codes 0-15 there are no extra bits; for 16, 17 and 18 there are 2, 3 and 7 extra bits respectively. The value of those bits is what I mean by the extra-bits value.
I think I'm missing something about what Huffman codes are generated by the Huffman code-length alphabet.
First off, your example code is invalid, since it oversubscribes the available bit patterns. 2, 2, 2, 2 would use all of the bit patterns, since there are only four possible two-bit patterns. You can't have one more code of any length. Possible valid sets of code lengths for five symbols are 1, 2, 3, 4, 4 or 2, 2, 2, 3, 3.
To answer your questions in order:
You need to send the leading zeros, but you do not need to send the trailing zeros. The HLIT and HDIST counts determine how many lengths of each code type are in the header, where any after those are taken to be zero. You need to send the zeros, since the code lengths are associated with the corresponding symbol by their position in the list.
It saves space in the header to have the HLIT and HDIST counts, so that you don't need to provide lengths for all 316 codes in every header.
I don't understand this question, but I guess it doesn't apply.
If I understand your question, extra bits have nothing to do with the descriptions of the Huffman codes in the headers. The extra bits are implied by the symbol. In any case, a repeated length is encoded with code 16, not code 18. So the four twos would be encoded as 2, 16(0), where the (0) represents two extra bits that are zeros.
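Here is a small Python sketch of that run-length scheme (the function name is mine; extra-bits values are returned alongside each symbol rather than bit-packed, to keep the example readable):

```python
def rle_code_lengths(lengths):
    # Encode a list of code lengths with DEFLATE's 0-18 code-length
    # alphabet; returns (symbol, extra_bits_value_or_None) pairs.
    out = []
    i, n = 0, len(lengths)
    while i < n:
        l = lengths[i]
        run = 1
        while i + run < n and lengths[i + run] == l:
            run += 1
        i += run
        if l == 0:
            while run >= 11:
                r = min(run, 138)
                out.append((18, r - 11))   # 18: repeat zero 11-138 times
                run -= r
            if run >= 3:
                out.append((17, run - 3))  # 17: repeat zero 3-10 times
                run = 0
            out += [(0, None)] * run       # runs too short to pack
        else:
            out.append((l, None))          # send the length itself once
            run -= 1
            while run >= 3:
                r = min(run, 6)
                out.append((16, r - 3))    # 16: copy previous length 3-6 times
                run -= r
            out += [(l, None)] * run
    return out
```

`rle_code_lengths([2, 2, 2, 2])` yields `[(2, None), (16, 0)]`, matching the "2, 16(0)" encoding above, and a run of 15 zeros yields `[(18, 4)]`, since 18's extra bits store repeat_count - 11.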

Bitwise operations Python

This is a first run-in with not only bitwise ops in python, but also strange (to me) syntax.
for i in range(2**len(set_)//2):
    parts = [set(), set()]
    for item in set_:
        parts[i&1].add(item)
        i >>= 1
For context, set_ is just a list of 4 letters.
There's a bit to unpack here. First, I've never seen [set(), set()]. I must be using the wrong keywords, as I couldn't find it in the docs. It looks like it creates a matrix in pythontutor, but I cannot say for certain. Second, while parts[i&1] is a slicing operation, I'm not entirely sure why a bitwise operation is required. For example, 0&1 should be 1 and 1&1 should be 0 (carry the one), so binary 10 (or 2 in decimal)? Finally, the last bitwise operation is completely bewildering. I believe a right shift is the same as dividing by two (I hope), but why i>>=1? I don't know how to interpret that. Any guidance would be sincerely appreciated.
[set(), set()] creates a list consisting of two empty sets.
0&1 is 0, 1&1 is 1. There is no carry in bitwise operations. parts[i&1] therefore refers to the first set when i is even, the second when i is odd.
i >>= 1 shifts right by one bit (which is indeed the same as dividing by two), then assigns the result back to i. It's the same basic concept as using i += 1 to increment a variable.
The effect of the inner loop is to partition the elements of set_ into two subsets, based on the bits of i. If the limit in the outer loop had been simply 2 ** len(set_), the code would generate every possible such partitioning. But since that limit was divided by two, only half of the possible partitions get generated - I couldn't guess what the point of that might be, without more context.
I've never seen [set(), set()]
This isn't anything interesting, just a list with two new sets in it. So you have seen it, because it's not new syntax. Just a list and constructors.
parts[i&1]
This tests the least significant bit of i and selects either parts[0] (if the lsb was 0) or parts[1] (if the lsb was 1). Nothing fancy like slicing, just plain old indexing into a list. The thing you get out is a set, .add(item) does the obvious thing: adds something to whichever set was selected.
but why i>>=1? I don't know how to interpret that
Take the bits in i and move them one position to the right, dropping the old lsb and keeping the sign. In a language with fixed-width integers you could picture this on, say, 8 bits; in Python integers are arbitrary-precision, so the value is however long it needs to be.
For positive numbers, the part about copying the sign is irrelevant.
You can think of right shift by 1 as a flooring division by 2 (this is different from truncation, negative numbers are rounded towards negative infinity, eg -1 >> 1 = -1), but that interpretation is usually more complicated to reason about.
Anyway, the way it is used here is just a way to loop through the bits of i, testing them one by one from low to high, but instead of changing which bit it tests it moves the bit it wants to test into the same position every time.
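Putting it together, a self-contained version of the loop (with a concrete set_) makes the behavior visible. It also suggests an answer to the halved range that puzzled the first answer: capping i below 2**(len(set_) - 1) pins the last element into parts[0], which suppresses mirror-image duplicates:

```python
set_ = ['a', 'b', 'c', 'd']

partitions = []
for i in range(2 ** len(set_) // 2):   # 8 iterations instead of 16
    parts = [set(), set()]
    for item in set_:
        parts[i & 1].add(item)   # low bit of i chooses the subset
        i >>= 1                  # expose the next bit
    partitions.append(parts)

# Since i < 2**(len(set_) - 1), the bit consumed for the last item
# ('d') is always 0, so 'd' always lands in parts[0]; each unordered
# split {A, B} therefore appears once instead of also as {B, A}.
```

For i = 0 every element lands in parts[0]; for i = 5 (binary 0101), 'a' and 'c' go to parts[1] while 'b' and 'd' stay in parts[0].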

Discontinuity in results from Excel's Present Value function PV

The PV function of Excel (present value of an investment) seems to make a sudden jump somewhere and is raising an error. For example:
WorksheetFunction.PV(-19, 240, 500000)
will return a value while:
WorksheetFunction.PV(-20, 240, 500000)
will raise an exception. There are other examples where things go wrong. Independently of the semantic meaning of the parameters, the formula to calculate the PV is purely continuous around the jump, so what is causing this?
When calculating things in, for instance, Mathematica this does not happen.
The first term of the formula used to calculate PV is PV * (1+rate) ^ nper (as in your link). Using your first example this gives 4.84861419016704E+304 so close enough to Excel’s limit (of 9.99999999999999E+307) that no element of the first term would have to increase by much for the result to exceed the inbuilt limitation, for which your second example seems to have been enough.
Note that, as @Juliusz has pointed out, the first parameter is the interest rate per period, so -19 represents -1900%. For 240 periods (which looks like 20 years of months) the combination is not realistic (i.e. 1900% per month). Also, the rate should not normally be negative; it is better to indicate the direction of the cash flow with the sign of the payment amount.
Seems that -19.45 is the minimum value, which translates to -1945%. Are you sure you need more?
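You can reproduce the blow-up outside Excel with ordinary IEEE doubles. The helper below is only the dominant annuity term of the PV formula, not Excel's exact internal code, and the function name is mine:

```python
import math

def annuity_term(rate, nper, pmt):
    # pmt * ((1 + rate)**nper - 1) / rate : the term of the PV
    # formula that grows like (1 + rate)**nper
    return pmt * ((1.0 + rate) ** nper - 1.0) / rate

ok = annuity_term(-19, 240, 500000)    # (-18)**240 ~ 1.9e301: still finite
bad = annuity_term(-20, 240, 500000)   # (-19)**240 ~ 7.9e306; times 500000
                                       # exceeds the double limit ~1.8e308

print(math.isfinite(ok))
print(math.isinf(bad))
```

Python quietly produces infinity where Excel raises an error, but the discontinuity sits in the same place: a finite formula value whose intermediate term no longer fits in a 64-bit float. An arbitrary-precision system like Mathematica never hits that ceiling.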

Compressed trie implementation?

I am going through a Udacity course and in one of the lectures (https://www.youtube.com/watch?v=gPQ-g8xkIAQ&feature=player_embedded), the professor gives the function high_common_bits which (taken verbatim from the lecture) looks like this in pseudocode:
function high_common_bits(a,b):
return:
- high-order bits that a and b have in common
- highest differing bit set
- all remaining bits clear
As an example:
a = 10101
b = 10011
high_common_bits(a,b) => 10100
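For concreteness, that pseudocode can be sketched in Python (integer inputs assumed; the behavior for a == b is my own choice, since the lecture doesn't define it):

```python
def high_common_bits(a, b):
    diff = a ^ b                      # 1-bits mark where a and b differ
    if diff == 0:
        return a                      # identical inputs: nothing to split on
    top = diff.bit_length() - 1       # position of the highest differing bit
    mask = ~((1 << top) - 1)          # keeps everything above that position
    return (a & mask) | (1 << top)    # shared high bits + differing bit set
```

With the example above, `high_common_bits(0b10101, 0b10011)` returns `0b10100`.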
He then says that this function is used in highly-optimized implementations of tries. Does anyone happen to know which exact implementation he's referring to?
If you are looking for a highly optimized bitwise compressed trie (aka radix tree), the BSD routing table uses one in its implementation. The code is not easy to read, though.
He was talking about Succinct Tries, tries in which each node requires only two bits to store (the theoretical minimum).
Steve Hanov wrote a very approachable blog post on succinct tries here. You can also read the original paper by Guy Jacobson (written back in 1989) which introduced them here.
A compressed trie stores a prefix in one node, then branches from that node to each possible item that's been seen that starts with that prefix.
In this case he's apparently doing a bit-wise trie, so it's storing a prefix of bits -- i.e., the bits at the beginning that the items have in common go in one node, then there are two branches from that node, one to a node for the next bit being a 0, and the other for the next bit being a 1. Presumably those nodes will be compressed as well, so they won't just store the next single bit, but instead store a number of bits that all matched in the items inserted into the trie so far.
In fact, the next bit following a given node may not be stored in the following nodes at all. That bit can be implicit in the link that's followed, so the next nodes store only bits after that.
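To make that structure concrete, here is a minimal uncompressed bitwise trie in Python (my own illustration, not the BSD code). A compressed/radix version would additionally merge each single-child chain into one node holding several prefix bits, as described above:

```python
class BitTrie:
    # Uncompressed bitwise trie over fixed-width integer keys.
    # Each node is a dict mapping a bit (0 or 1) to a child node.
    def __init__(self, width):
        self.width = width
        self.root = {}

    def insert(self, key):
        node = self.root
        for pos in range(self.width - 1, -1, -1):   # walk bits high to low
            bit = (key >> pos) & 1
            node = node.setdefault(bit, {})

    def contains(self, key):
        node = self.root
        for pos in range(self.width - 1, -1, -1):
            bit = (key >> pos) & 1
            if bit not in node:
                return False
            node = node[bit]
        return True
```

Inserting 0b10101 and 0b10011 (the example keys above) shares the nodes for the common prefix 10 and splits at the first differing bit, which is exactly the split point high_common_bits identifies.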

scale 14 bit word to an 8 bit word

I'm working on a project where I sample a signal with an ADC, that represents values as 14 bit words. I need to scale the values to 8 bit words. What's a good way to go about this in general. By the way, I'm using an FPGA so I'd like to do it in "hardware" rather than a software solution. Also in case you're wondering the chain of events will be: sample analog signal, represent sample value with 14 bit word, scale 14 bit word to 8 bit word, transmit 8 bit word with UART to PC COM1.
I've never done this before. I was assuming you use quantization levels, but I'm not sure what an efficient circuit for this operation would be. Any help would be appreciated.
Thanks
You just need an add and a shift:
val_8 = (val_14 + 32) >> 6;
(The + 32 is necessary to get correct rounding - you can omit it but you will get more truncation noise in your signal if you do.)
I think you just drop the six lowest resolution bits and call it good, right? But I might not fully understand the problem statement.
Paul's algorithm is correct, but you'll need some bounds checking.
assign val_8 = (&val_14[13:5]) ? //Make sure your sum won't overflow
8'hFF : //Assign all 1's if it will
val_14[13:6] + val_14[5];
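A Python model of both answers' arithmetic (assuming unsigned 14-bit samples) shows why the saturation check is needed once you add the rounding term:

```python
def scale_14_to_8(val_14):
    # round to nearest: add half of the 6-bit step (32) before shifting
    val_8 = (val_14 + 32) >> 6
    # near full scale the rounding add overflows 8 bits (16383 -> 256),
    # so saturate at 255, as the Verilog bounds check does
    return min(val_8, 255)
```

`scale_14_to_8(16383)` returns 255 rather than wrapping to 0, and mid-range codes round rather than truncate (e.g. 95 -> 1, but 96 -> 2).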
