What does the 'D' stand for in the numbers of an MSC Nastran .f06 output file? - nastran

I performed flutter analysis using MSC Nastran, and I want to extract the generalized aerodynamic matrix, which is Qhh in the .f06 output file.
I have more or less succeeded, but as shown in the figure, the numbers contain 'D' rather than 'E', which is new to me.
[Figure: part of the Nastran .f06 output file]
Does anyone have an idea what this 'D' in the numbers stands for?
Thanks.

E = exponential notation (SINGLE PRECISION)
D = DOUBLE PRECISION
Those are two ways to store REAL numbers in memory.
DOUBLE PRECISION numbers usually have at least twice the number of significant decimal digits.
You can read more here: fortran_datatype
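If you need to read these values back programmatically, note that most languages' float parsers only accept 'E'. A minimal Python sketch (the sample literals below are made up, not taken from your .f06 file) is simply to swap the exponent marker:

    # Minimal sketch: convert a Fortran double-precision literal such as "1.234D-05"
    # into a Python float by swapping the 'D' exponent marker for 'E'.
    def parse_fortran_double(token):
        return float(token.replace("D", "E").replace("d", "e"))

    print(parse_fortran_double("1.2345678901234D-05"))    # 1.2345678901234e-05
    print(parse_fortran_double("-3.14159265358979D+00"))  # -3.14159265358979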

Related

To how many decimal places is bc accurate?

It is possible to print a square root to several hundred decimal places in bc, as it is in C. However, in C it is only accurate to 15. I have checked the square root of 2 to 50 decimal places and it is accurate, but what is the limit in bc? I can't find any reference to this.
To how many decimal places is bc accurate?
bc is an arbitrary precision calculator. Arbitrary precision just tells us how many digits it can represent (as many as will fit in memory), but doesn't tell us anything about accuracy.
However in C it is only accurate to 15
C uses your processor's built-in floating point hardware. This is fast, but has a fixed number of bits to represent each number, so is obviously fixed rather than arbitrary precision.
Any arbitrary precision system will have more ... precision than this, but could of course still be inaccurate. Knowing how many digits can be stored doesn't tell us whether they're correct.
However, the GNU implementation of bc is open source, so we can just see what it does.
The bc_sqrt function uses an iterative approximation (Newton's method, although the same technique was apparently known to the Babylonians at least as early as 1000 BC).
This approximation is just run, improving each time, until two consecutive guesses differ by less than the precision requested. That is, if you ask for 1,000 digits, it'll keep going until the difference is at most in the 1,001st digit.
The only exception is when you ask for an N-digit result and the original number has more than N digits. It'll use the larger of the two as its target precision.
Since the convergence rate of this algorithm is faster than one digit per iteration, there seems little risk of two consecutive iterations agreeing to some N digits without also being correct to N digits.
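As a rough illustration of that iteration, here is a Python sketch using the decimal module (this is not bc's actual C code); the stopping rule mirrors the one described above:

    # Sketch of the Newton (Babylonian) iteration for square roots, written with
    # Python's decimal module so the precision can be chosen at run time.
    from decimal import Decimal, getcontext

    def newton_sqrt(x, digits):
        if x == 0:
            return Decimal(0)
        getcontext().prec = digits + 2            # a couple of guard digits
        x = Decimal(x)
        guess = x / 2 if x > 1 else Decimal(1)
        tolerance = Decimal(10) ** -digits
        while True:
            better = (guess + x / guess) / 2
            if abs(better - guess) < tolerance:   # consecutive guesses agree
                return +better                    # unary plus rounds to the context precision
            guess = better

    print(newton_sqrt(2, 50))   # 1.4142135623730950488... (matches sqrt(2) to the 50 requested digits)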

directx local space coordinates float accuracy

I'm a bit confused about the local space coordinate system. Suppose I have a complex object in local space. I know that when I want to put it into world space I have to multiply it by Scale, Rotate and Translate matrices. But the problem is that the local coordinates only range from -1.0f to 1.0f; when I want to have a vertex like (1/500, 1/100, 1/100), things will not work: everything will become 0 due to the float accuracy problem.
The only solution I can see now is to separate the object into lots of local space systems and ProjectView each one individually to put them together. That does not seem like the correct way to solve the problem. I've checked lots of books, but none of them mention this issue. I really want to know how to solve it.
when I want to have vertex like (1/500,1/100,1/100) things will not work
What makes you think that? The float accuracy problem does not mean a value will coerce to 0 if it can't be accurately represented. It just means it will coerce to the floating point number closest to the intended figure.
It's the very same as writing down, e.g., 3/9 with at most 6 significant decimal digits: 0.333333 – it didn't coerce to 0. And the very same goes for floating point.
Now you may be familiar with scientific notation: x·10^y – this is essentially decimal floating point, a mantissa x and an exponent y which specifies the order of magnitude. In binary floating point it becomes x·2^y. In either case the significant digits are in the mantissa. Your typical single-precision floating point number (in DirectX just as in OpenGL) has a 23-bit mantissa plus an implicit leading bit, i.e. 24 significant binary digits (about 7 decimal digits).
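You can see this from Python by round-tripping the value through a 32-bit float (a quick sketch using the standard struct module; a GPU vertex buffer stores components in the same 32-bit format):

    # Quick check: a small coordinate like 1/500 does not collapse to 0 in a 32-bit float,
    # it just snaps to the nearest representable value.
    import struct

    def to_float32(x):
        """Round-trip a Python float through the 32-bit single-precision format."""
        return struct.unpack("f", struct.pack("f", x))[0]

    print(to_float32(1 / 500))         # about 0.0020000000949949 — close to 0.002, not 0
    print(to_float32(1 / 500) == 0.0)  # False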
I really want to know how to solve it.
The real trouble with floating point numbers arises when you have to mix and merge numbers across a large range of orders of magnitude. As long as the numbers are of similar orders of magnitude, everything happens within the mantissa. And that one last change of order of magnitude into the [-1, 1] range will not hurt you; heck, this can be done by "normalizing" the floating point value and then simply dropping the exponent.
Recommended read: http://floating-point-gui.de/
Update
One further thing: If you're writing 1/500 in a language like C, then you're performing an integer division and that will of course round down to 0. If you want this to be a floating point operation you either have to write floating point literals or cast to float, i.e.
1./500.
or
(float)1/(float)500
Note that casting one of the operands to float suffices to make this a floating point division.
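For comparison, the same pitfall written out in Python (a small sketch; the C snippets above are the ones relevant to the question): Python 3's / is already floating point division, while // is the truncating division that C's 1/500 performs for positive operands.

    # The C pitfall, mirrored in Python: // truncates like C's integer division
    # (for positive operands), while / always performs floating point division in Python 3.
    print(1 // 500)      # 0     — like C's 1/500
    print(1 / 500)       # 0.002 — like C's 1./500. or (float)1/(float)500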

Suppress scientific notation without knowing the length of the number?

In Python, how could I go about suppressing scientific notation with complete precision WITHOUT knowing the length of the number?
I need python to dynamically be able to return the number in normal form with exact precision no matter how large the number is, and to do it without any trailing zeros. The numbers will always be integers but they will be getting very large and I need them to be completely accurate. Even a single digit being rounded or changed would mess up my program.
Any ideas?
Use the decimal class.
Unlike hardware based binary floating point, the decimal module has a user alterable precision (defaulting to 28 places) which can be as large as needed for a given problem.
From https://docs.python.org/library/decimal.html
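A short sketch of what that looks like (the precision value and the sample numbers here are arbitrary):

    # Sketch: print very large values in plain decimal form, with no scientific notation.
    from decimal import Decimal, getcontext

    getcontext().prec = 100                    # raise the working precision past the default 28

    big = Decimal(2) ** 200                    # fits comfortably in 100 digits, so no rounding
    print(format(big, "f"))                    # all 61 digits, no 'E' notation

    # If the value arrives as a string in scientific notation, Decimal expands it exactly:
    print(format(Decimal("1.5E+30"), "f"))     # 15 followed by 29 zeros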

How to measure the "probability" that a string is some sort of code or nonsense

Let's assume that we have following strings:
q8GDNG8h029751
DNS
stackoverflow.com
28743.8.4.919
q7Q5w5dP012855
Martin_Luther
0000000100000000-0000000160000000
1344444967\.962
ExTreme_penguin
Obviously some of those can be classified by our brain as strings containing information, strings that have some "meaning" for humans. On the other hand, there are strings like "q7Q5w5dP012855" that are definitely codes that could mean something only to a computer.
My question is: Can we calculate some probability that a string can actually tell us something?
I have some thoughts, such as doing frequency analysis or counting capital letters, etc., but it would be convenient to have something more "scientific".
If you know the language that the strings are in, you could use digram or trigram letter frequencies for the words in that language. These are quite small lookup tables ([26 x 26] or [26 x 26 x 26]); each entry can be a floating point number giving the probability of that letter pair or triple occurring in the language. Many of these would be zero for a meaningless string. You could add them up, or simply count the number of zero-probability sequences.
Of course this needs setting up for each language.
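A rough Python sketch of that idea (the tiny corpus string below is only a stand-in; in practice you would fill the [26 x 26 x 26] table from a large sample of real text in the target language):

    # Sketch of the trigram idea: learn letter-trigram counts from a sample of ordinary
    # text, then flag candidate strings whose trigrams were never seen in that sample.
    import re
    from collections import Counter

    def trigrams(text):
        letters = re.sub(r"[^a-z]", " ", text.lower())
        for word in letters.split():
            for i in range(len(word) - 2):
                yield word[i:i + 3]

    # Stand-in corpus; a real table would be built from a large body of text.
    corpus = "the quick brown fox jumps over the lazy dog and other ordinary sentences"
    counts = Counter(trigrams(corpus))

    def score(candidate):
        """Return (number of never-seen trigrams, total trigrams) for a candidate."""
        grams = list(trigrams(candidate))
        unseen = sum(1 for g in grams if counts[g] == 0)
        return unseen, len(grams)

    for s in ["stackoverflow.com", "q7Q5w5dP012855", "Martin_Luther"]:
        print(s, score(s))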

Compression using Ascii, trying to figure out how many bits to store the following efficiently

I am trying to learn the basics of compression using only ASCII.
Suppose I am sending an email made up of strings of lower-case letters. If the file has n characters, each stored as an 8-bit extended ASCII code, then we need 8n bits.
But according to the guiding principle of compression, we discard the unimportant information.
So we don't need all the ASCII codes to encode strings of lowercase letters: they use only 26 characters. We can make our own code with 5-bit codewords (2^5 = 32 > 26), code the file using this coding scheme, and then decode the email once received.
The size has decreased by 8n - 5n = 3n, i.e. a 37.5% reduction.
But what IF the email were formed from lower-case letters (26), upper-case letters, and m extra characters, and they have to be stored efficiently?
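To make the 5-bit scheme described above concrete, here is a rough Python sketch of the packing step (the message string is just an example; decoding would reverse the same bit manipulation):

    # Sketch of the 5-bit scheme: pack lowercase letters (a=0 .. z=25) into 5-bit
    # codewords instead of 8-bit ASCII bytes.
    def pack5(text):
        bits, nbits, out = 0, 0, bytearray()
        for ch in text:
            bits = (bits << 5) | (ord(ch) - ord("a"))
            nbits += 5
            while nbits >= 8:
                nbits -= 8
                out.append((bits >> nbits) & 0xFF)    # emit the top full byte
        if nbits:
            out.append((bits << (8 - nbits)) & 0xFF)  # flush the final partial byte
        return bytes(out)

    msg = "compression"
    print(len(msg), "bytes as 8-bit ASCII ->", len(pack5(msg)), "bytes packed")   # 11 -> 7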
If you have n symbols of equal probability, then it is possible to code each symbol using log2(n) bits. This is true even if log2(n) is fractional, using arithmetic or range coding. If you limit it to Huffman (fixed number of bits per symbol) coding, you can get close to log2(n), with still a fractional number of bits per symbol on average.
For example, you can encode ten symbols (e.g. decimal digits) in very close to 3.322 bits per symbol with arithmetic coding. With Huffman coding, you can code six of the symbols with three bits and four of the symbols with four bits, for an average of 3.4 bits per symbol.
The use of shift-up and shift-down operations can be beneficial since in English text you expect to have strings of lower case characters with occasional upper case characters. Now you are getting into both higher order models and unequal frequency distributions.
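A small Python sketch of those numbers (the split of n equally likely symbols between floor(log2 n)-bit and one-bit-longer codewords is the standard Huffman result; the symbol counts 10, 26 and 52 are just the examples discussed above):

    # The answer's numbers, computed: the ideal rate is log2(n) bits per symbol; a
    # Huffman code for n equally likely symbols mixes codewords of length k and k+1,
    # where k = floor(log2(n)).
    import math

    def huffman_equiprobable(n):
        k = math.floor(math.log2(n))
        longer = 2 * (n - 2 ** k)          # symbols that get (k + 1)-bit codewords
        shorter = n - longer               # symbols that get k-bit codewords
        average = (shorter * k + longer * (k + 1)) / n
        return shorter, longer, average

    for n in (10, 26, 52):
        print(n, "symbols: ideal", round(math.log2(n), 3), "bits;"
              " Huffman (k-bit, (k+1)-bit, average) =", huffman_equiprobable(n))

For n = 10 this reproduces the figures above: 3.322 bits ideally, and a Huffman code with six 3-bit and four 4-bit codewords averaging 3.4 bits per symbol.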
