What precisely is the "silence" value in the SDL audio API?

In SDL, when you set up your audio output device, you and SDL have to agree on an audio format - e.g. 44.1 kHz stereo 16-bit signed little-endian. That's fine. But along with the final agreed format, you also get a computed "silence" value which doesn't seem well documented.
A silent sound obviously consists of the same sample value repeated over and over again, and you want that to be at the "zero" level. In a sense any constant value will do, but you have to agree a value (so you don't get pops when switching to a different sound), and in a sane world you want to choose a value bang in the centre of your sample-value range.
So if you happen to use an unsigned format, with sample values ranging from 0 to whatever, your silence value will be (whatever/2).
EDIT - inserted "unsigned" below to avoid confusion.
That's all fine. But the silence value you get given is an unsigned 8-bit integer. That doesn't work very well if you want unsigned 16-bit samples - the logical silence value of 0x8000 requires two different byte values, and it requires them to be in the correct endian order.
So the silence value you get from SDL doesn't seem to make much sense. You can't use it to wipe your buffers, for instance, without dealing with extra complications and making inferences which pretty much make the precalculated silence value pointless anyway.
Which means, of course, that I've misunderstood the point.
So - if this isn't how the silence value is meant to be used, how should it be used?
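To make the complaint concrete, here is a small C illustration (not SDL-specific; the buffer size and fill value are made up): a one-byte fill can never produce the mid-range 16-bit value 0x8000.
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Hypothetical buffer of unsigned 16-bit samples. */
    uint16_t buf[4];

    /* Filling byte-by-byte with a single value, as memset does... */
    memset(buf, 0x80, sizeof buf);

    /* ...yields 0x8080 per sample, not the mid-range silence value 0x8000. */
    printf("sample = 0x%04X\n", (unsigned)buf[0]);
    return 0;
}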

I have no evidence to back this up but I think the assumption here is that "silence" could be interpreted as "silence for common soundcard formats". Those being:
Unsigned 8-bit integers
Signed 16-bit integers
Signed 32-bit integers (for 24-bit audio data)
Normalized 32-bit floating point
Normalized 64-bit floating point.
In all the cases except unsigned 8-bit, zero (0) is the "zero amplitude" value, and a buffer full of zero bytes is therefore silent regardless of sample width or endianness. So a single unsigned 8-bit value (0x80 for unsigned 8-bit, 0x00 for everything else) is enough to express "zero amplitude" for all of these formats.
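In practice this means you can wipe a buffer with a plain byte-wise fill using the silence value SDL hands back. A minimal SDL2 sketch, assuming a callback-driven playback device (the 44.1 kHz stereo S16 request and the one-second delay are arbitrary choices for illustration):
#include <SDL2/SDL.h>
#include <string.h>

/* Fill the output stream with the silence byte SDL computed for the device. */
static void audio_callback(void *userdata, Uint8 *stream, int len)
{
    Uint8 silence = *(Uint8 *)userdata;
    memset(stream, silence, (size_t)len);  /* 0x00 for signed/float formats, 0x80 for AUDIO_U8 */
}

int main(void)
{
    if (SDL_Init(SDL_INIT_AUDIO) != 0)
        return 1;

    static Uint8 silence_byte;             /* must outlive the audio device */
    SDL_AudioSpec want, have;

    SDL_zero(want);
    want.freq = 44100;
    want.format = AUDIO_S16SYS;            /* 16-bit signed, native byte order */
    want.channels = 2;
    want.samples = 4096;
    want.callback = audio_callback;
    want.userdata = &silence_byte;

    SDL_AudioDeviceID dev =
        SDL_OpenAudioDevice(NULL, 0, &want, &have, SDL_AUDIO_ALLOW_FORMAT_CHANGES);
    if (dev == 0) {
        SDL_Quit();
        return 1;
    }

    silence_byte = have.silence;           /* the precomputed "silence" value for the agreed format */

    SDL_PauseAudioDevice(dev, 0);          /* start playback; the callback keeps it silent */
    SDL_Delay(1000);

    SDL_CloseAudioDevice(dev);
    SDL_Quit();
    return 0;
}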

Related

How to use leading_zeros/trailing_zeros in platform independent way?

I want to find the first non-zero bit in the binary representation of a u32. leading_zeros/trailing_zeros look like what I want:
let x: u32 = 0b01000;
println!("{}", x.trailing_zeros());
This prints 3 as expected and described in the docs. But what will happen on big-endian machines - will it be 3 or some other number?
The documentation says
Returns the number of trailing zeros in the binary representation
Is it related to the machine's binary representation (so the result of trailing_zeros depends on the architecture) or to the base-2 numeral system (so the result will always be 3)?
The type u32 represents binary numbers with 32 bits as an abstract concept. You can imagine them as abstract, mathematical numbers in the range from 0 to 2^32 - 1. The binary representation of these numbers is written in the usual convention of starting with the most significant bit (MSB) and ending with the least significant bit (LSB), and the trailing_zeros() method returns the number of trailing zeros in that representation.
Endianness only comes into play when serializing such an integer to bytes, e.g. for writing it to a bytes buffer, a file or the network. You are not doing any of this in your code, so it doesn't matter here.
As mentioned above, writing a number starting with the MSB is also just a convention, but this convention is pretty much universal today for numbers written in positional notation. For programming, this convention is only relevant when formatting a number for display, parsing a number from a string, and maybe for naming methods like trailing_zeros(). When storing a u32 in a register, the bits don't have any defined order.
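The same point is language-independent. Here is a C analogue (standard C only; the loop is a stand-in for Rust's trailing_zeros): the trailing-zero count is a property of the abstract value, and byte order only appears once the value is serialized to bytes.
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    uint32_t x = 0x00000008u;               /* the same value as 0b01000 in the Rust example */

    /* Count trailing zeros by inspecting the value, not its storage.
       (Reports 0 for x == 0, whereas Rust's u32::trailing_zeros reports 32;
       irrelevant here because x is non-zero.) */
    unsigned tz = 0;
    for (uint32_t v = x; v != 0 && (v & 1u) == 0; v >>= 1)
        tz++;
    printf("trailing zeros: %u\n", tz);     /* prints 3 on every architecture */

    /* Endianness only shows up when the value is copied out as bytes. */
    uint8_t bytes[4];
    memcpy(bytes, &x, sizeof bytes);        /* 08 00 00 00 on little-endian, 00 00 00 08 on big-endian */
    printf("first byte in memory: %02x\n", (unsigned)bytes[0]);
    return 0;
}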

What are UInt16LE, UInt16BE, etc. in Node JS?

In all of my time programming I have squeaked by without ever learning this stuff. Would love to know more about what these are and how they are used:
UInt8
UInt16LE
UInt16BE
UInt32LE
UInt32BE
Int8
Int16LE
Int16BE
Int32LE
Int32BE
FloatLE
FloatBE
DoubleLE
DoubleBE
See https://nodejs.org/api/buffer.html#buffer_buf_readuint8_offset_noassert for where Node uses these.
These data types describe how a number is represented in a particular byte order. That is typically essential for:
Network protocols
Binary file formats
It is essential because one system must write integers/floats in such a way that they read back as the same value on the other side. So the format used is simply a convention between the two sides (writer and reader).
What the abbreviations mean:
The BE suffix stands for big-endian
LE stands for little-endian
Int is a signed integer
UInt is an unsigned integer
The number is the width of the value in bits.
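To illustrate what the LE/BE suffixes mean, here is the byte arithmetic behind a 16-bit read, sketched in C (Node's readUInt16LE/readUInt16BE do the equivalent on a Buffer; the two example bytes are made up):
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Two raw bytes as they might sit in a Buffer, a file or a network packet. */
    const uint8_t buf[2] = { 0x34, 0x12 };

    /* Little-endian: least significant byte first (what readUInt16LE does). */
    uint16_t le = (uint16_t)(buf[0] | (buf[1] << 8));

    /* Big-endian: most significant byte first (what readUInt16BE does). */
    uint16_t be = (uint16_t)((buf[0] << 8) | buf[1]);

    printf("LE: 0x%04X  BE: 0x%04X\n", (unsigned)le, (unsigned)be);  /* LE: 0x1234  BE: 0x3412 */
    return 0;
}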

Why Use Both Little and Big Endian in WAV Header

Over the years, or at least from what I have heard, there have been many debates over whether to use big or little endian. However, I've wondered: when do you see both? Odd question, right?
Upon having to decode WAV files, I noticed the header was composed of different segments which could be either big or little endian.
https://ccrma.stanford.edu/courses/422/projects/WaveFormat/
The forum thread referenced here explains the reason for using little-endian for a large percentage of the file (Why are an integer's bytes stored backwards? Does this apply to headers only?):
'WAV files are little-endian (least significant bytes first) because the format originated for operating systems running on intel processor based machines which use the little endian format to store numbers.'
However, I have yet to find out why big-endian is used as well.
Thanks in advance
It's a bit of a stretch to say the chunk IDs are big-endian. In practice the IDs are 4-character ASCII strings (unterminated), e.g. 'RIFF', 'fmt ' and 'data'. If you restrict yourself to string comparisons then you can avoid the need to concern yourself with the byte ordering. As such, the waveformat structures are typically defined like the following in C:
typedef struct WAVHEADER
{
    char riff[4];     /* chunk ID: the ASCII characters 'R','I','F','F' - compared as text */
    int  chunkSize;   /* chunk size: a little-endian 32-bit integer */
    /* etc... */
} WAVHEADER;
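A sketch of that approach (the header bytes and the helper name check_riff are made up for illustration): the chunk ID is compared as plain ASCII, while the size that follows it is assembled explicitly as a little-endian 32-bit value.
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static int check_riff(const uint8_t *hdr)
{
    /* Compare the chunk ID as four ASCII characters - no byte-order concerns. */
    if (memcmp(hdr, "RIFF", 4) != 0)
        return -1;

    /* The chunk size, by contrast, is a little-endian 32-bit integer. */
    uint32_t chunk_size = (uint32_t)hdr[4]
                        | ((uint32_t)hdr[5] << 8)
                        | ((uint32_t)hdr[6] << 16)
                        | ((uint32_t)hdr[7] << 24);

    printf("chunk size: %u bytes\n", (unsigned)chunk_size);
    return 0;
}

int main(void)
{
    /* A made-up header fragment: "RIFF" followed by a size of 36 stored LSB first. */
    const uint8_t hdr[8] = { 'R', 'I', 'F', 'F', 0x24, 0x00, 0x00, 0x00 };
    return check_riff(hdr) == 0 ? 0 : 1;
}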

Verilog: Converting BCD (or binary) to BCH

I'm looking to code a BCD (or binary) to binary-coded hexadecimal (BCH) converter, whose output will then be converted to 7-segment display codes and sent serially to a latched shift register to drive the display. It's for a 16-bit microprocessor that outputs signed 16-bit numbers.
I've already successfully coded and fully tested a binary to BCD converter using the shift-and-add-3 algorithm. The number is converted to positive if negative, and a sign flag is set to record the sign. Most design examples I saw on the internet were combinational; I took a sequential approach, which takes around 35 clock cycles.
My question is: is there a way to convert the BCD I have to BCH? Or perhaps it would be easier to convert the binary to BCH - whichever way is more feasible. Performance is not an issue. Is there an existing algorithm to do so?
I appreciate your responses.
You should just use a lookup table. Have the input to your case statement be your BCD digit and the output be your BCH digit. Both are guaranteed to be 4 bits wide, so you can process your BCD digits one at a time and each one will produce a 4-bit output.
Converting from binary to BCD is harder because you need the double-dabble algorithm (as you have found out). But once it's in BCD you shouldn't have a problem going to BCH.
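For illustration, here is the lookup-table idea sketched in C rather than Verilog (in the actual design it would be a case statement feeding the shift register); the segment encodings assume an active-high, common-cathode display wired with bit 0 = segment a through bit 6 = segment g, so adjust them to your hardware:
#include <stdint.h>
#include <stdio.h>

/* 4-bit hex digit (0-F) to 7-segment code, one table entry per digit. */
static const uint8_t seg7[16] = {
    0x3F, 0x06, 0x5B, 0x4F, 0x66, 0x6D, 0x7D, 0x07,   /* 0-7 */
    0x7F, 0x6F, 0x77, 0x7C, 0x39, 0x5E, 0x79, 0x71    /* 8-F */
};

int main(void)
{
    for (unsigned digit = 0; digit < 16; digit++)
        printf("%X -> 0x%02X\n", digit, seg7[digit]);
    return 0;
}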

XSD: What is the difference between xs:integer and xs:int?

I have started to create an XSD and found xs:integer and xs:int in a couple of examples.
What is the difference between xs:integer and xs:int?
When should I use xs:integer?
When should I use xs:int?
The difference is the following:
xs:int is a signed 32-bit integer.
xs:integer is an unbounded integer value.
See for details https://web.archive.org/web/20151117073716/http://www.w3schools.com/schema/schema_dtypes_numeric.asp
For example, XJC (Java) generates Integer for xs:int and BigInteger for xs:integer.
The bottom line: use xs:int if you want to work cross-platform and be sure that your numbers will pass without a problem.
If you want bigger numbers, use xs:long instead of xs:integer (it will be generated as Long).
The xs:integer type is a restriction of xs:decimal, with the fractionDigits facet set to zero and with a lexical space which forbids the decimal point and trailing zeroes which would otherwise be legal. It has no minimum or maximum value, though implementations running in machines of finite size are not required to be able to accept arbitrarily large or small values. (They are required to support values with 16 decimal digits.)
The xs:int type is a restriction of xs:long, with the maxInclusive facet set to 2147483647 and the minInclusive facet to -2147483648. (As you can see, it will fit conveniently into a two's-complement 32-bit signed-integer field; xs:long fits in a 64-bit signed-integer field.)
The usual rule is: use the one that matches what you want to say. If the constraint on an element or attribute is that its value must be an integer, xs:integer says that concisely. If the constraint is that the value must be an integer that can be expressed with at most 32 bits in twos-complement representation, use xs:int. (A secondary but sometimes important concern is whether your tool chain works better with one than with the other. For data that will live longer than your tool chain, it's wise to listen to the data first; for data that exists solely to feed the tool chain, and which will be of no interest if you change your tool chain, there's no reason not to listen to the tool chain.)
I would just add a note of pedantry that may be important to some people: it's not correct to say that xs:int "is" a signed 32-bit integer. That form of words implies an implementation in memory (or registers, etc) within a binary digital computer. XML is character-based and would implement the maximum 32-bit signed value as "2147483647" (my quotes, of course), which is a lot more than 32 bits! What IS true is that xs:int is (indirectly) a restriction of xs:integer which sets the maximum and minimum allowed values to be the same as the corresponding implementation-imposed limits of a 32-bit integer with a sign bit.
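As a quick sanity check, the facet values quoted above are exactly the bounds of a two's-complement 32-bit integer:
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* xs:int's minInclusive/maxInclusive facets match INT32_MIN/INT32_MAX. */
    printf("minInclusive: %" PRId32 "\n", INT32_MIN);  /* -2147483648 */
    printf("maxInclusive: %" PRId32 "\n", INT32_MAX);  /*  2147483647 */
    return 0;
}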
