One of my memory model files shows the name IN22_R2P_W00064B080M02C064.mdt. I want to know what W, B, M, and C stand for in the model name. Can anyone explain?
As Greg stated above, this is usually a naming convention that encodes features (such as BIST, port widths/operations, etc.), and it is most often tied to a process node (28nm, 14nm FinFET). It depends on what generated this model.
R2P likely means a 2-port register file,
W00064 likely means 64 bits in width,
and so on. But this looks like a pure model, not something that will eventually be used for synthesis.
TSMC has a memory "compiler" that generates these models and usually gives them strange-looking names like this; the names make sense once you read the compiler's manual.
I would like to create a new data type in Rust on the "bit-level".
For example, a quadruple-precision float. I could create a structure that holds two double-precision floats and increase the precision by splitting the quad into two doubles, but I don't want to do that (avoiding that is what I mean by "on the bit-level").
I thought about using a u8 array or a bool array, but in both cases I waste 7 bits per stored bit (because a bool is also one byte in size). I know there are several crates that implement something like bit-arrays or bit-vectors, but looking through their source code didn't help me understand their implementation.
How would I create such a bit-array without wasting memory, and is this the way I would want to choose when implementing something like a quad-precision type?
I don't know how to implement new data types that aren't built from the basic types (or from structures combining them), and I haven't been able to find a solution on the internet yet; maybe I'm not searching with the right keywords.
The question you are asking has no direct answer: just like any other programming language, Rust has a basic set of rules for type layouts. This is because (most) real-world CPUs can't address individual bits, need certain alignments when referencing memory, have rules about how pointer arithmetic works, and so on.
For instance, if you create a type of just two bits, you'll still need an 8-bit byte to represent it, because the instruction sets of most CPUs simply cannot address two individual bits; there is also no way to take the address of such a type, because addressing works at the byte level at minimum. More useful information can be found here, in section 2, The Anatomy of a Type. Be aware that the non-wasting bit-level type you are thinking about would need to fulfill all the rules mentioned there.
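You can see these layout rules in action with a quick check (TwoBits is a made-up wrapper, purely for illustration):

    use std::mem::{align_of, size_of};

    // Even a type that logically needs only two bits occupies a whole
    // byte, because a byte is the smallest addressable unit.
    struct TwoBits(u8);

    fn main() {
        assert_eq!(size_of::<bool>(), 1); // bool is one byte, as noted above
        assert_eq!(size_of::<TwoBits>(), 1);
        assert_eq!(align_of::<TwoBits>(), 1);
    }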
It's a perfectly reasonable approach to represent what you want as, e.g., a single wrapped u128 and implement all arithmetic on top of that type. Another, more generic, approach would be to use a Vec<u8>. Either way you'll end up doing a relatively large amount of bit-masking, indirection and the like.
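To make the first suggestion concrete, here is a minimal sketch of the wrapped-u128 approach (the type and method names are made up for illustration, not a standard API):

    /// 128 individually addressable bits packed into a single u128.
    #[derive(Clone, Copy, Default)]
    struct Bits128(u128);

    impl Bits128 {
        /// Read bit i (0 = least significant).
        fn get(&self, i: u32) -> bool {
            assert!(i < 128);
            (self.0 >> i) & 1 == 1
        }

        /// Write bit i.
        fn set(&mut self, i: u32, value: bool) {
            assert!(i < 128);
            if value {
                self.0 |= 1u128 << i;
            } else {
                self.0 &= !(1u128 << i);
            }
        }
    }

    fn main() {
        let mut b = Bits128::default();
        b.set(100, true); // one stored bit costs exactly one bit of storage
        assert!(b.get(100));
        assert!(!b.get(99));
    }

A quad-precision float built this way would carve the 128 bits into sign, exponent and mantissa fields and implement the arithmetic on those fields with exactly this kind of shifting and masking.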
Having a look at rust_decimal or similar crates might also be a good idea.
Is it possible for two flops (or any other instances) to have the same name in a netlist?
Assuming there is no hierarchy, say I have a design of 10M instances and there is a flop called foo; is it possible for another flop to have the same name 'foo'?
No. Within a single scope, you cannot re-use the same identifier for another purpose.
As Dave says, no. But if you had 10M instances, you wouldn't have coded them manually; your logic synthesiser would have, and it won't output an illegal netlist.
The only way your question makes sense is to consider one large Verilog file; obviously, there can't be more than one reg/logic with the name foo there. This is fundamental to the Verilog scoping rules.
If there is any iteration or local scope of any form in your design, the elaboration process will construct a form of hierarchy to handle it. If you flatten the resulting netlist (by default or by design), then each element will either gain an abstract unique identifier (n1, n2, n3...) or be prefixed/suffixed with some hierarchical information (gen_1_foo, gen_2_foo...).
After netlist generation, it may be non-trivial to relate a specific flop back to its syntactic source in the Verilog, but you brought this on yourself through the lack of hierarchy and structure in the design.
We usually use const in C++ to imply that a value does not change (is read-only). Why did GLSL/Vulkan choose the word uniform for shader and resource definitions? Wouldn't it be more consistent to use the keyword borrowed from C/C++?
Besides that, the uniform keyword in shader definitions probably gives the compiler clues to place those resources as close to the hardware as possible, perhaps in shared memory or registers? I'm not sure about that.
That is also probably why the Vulkan spec mentions that we need only small amounts of data for those types of resources, e.g. values of cosmological constants, etc.
Is there anything I'm missing, or some bit of history that has passed me by?
Uniforms in GPU programming and const in C++ are focused on different things.
C++ const documents that a variable is not intended to be changed, with some compiler enforcement. As such it's more about using the type system to improve clarity and enforce intended usage -- important for large-project software engineering. You can still get around it with const_cast or other tricks, and the compiler can't assume you didn't, so it's not strictly enforced.
The important thing about uniforms is that they're, well, uniform: they have the same value whenever they are read within a draw call. Since there might be hundreds to millions of reads of that value in a single draw call, this allows the value to be cached (and only one copy of it), preloaded into registers or cache before the shaders run, kept in a non-coherent cache, broadcast from a single read across all SIMD lanes in a core, and so on. For this to work, the fact that the contents can't change must be strictly enforced (with memory aliasing you can now get around even this, but the results are very much undefined if you do). So uniform really isn't about declaring intent to other programmers for software-engineering benefits the way const is; it's about declaring intent to the compiler and driver so they can optimize based on it.
D3D uses "const" and "constant buffer" rather than uniform, so clearly there is some overlap. Though that does lead to saying things like "how many times do you update constants per frame?" which when you think about it is kind of a weird thing to say :). The values are constant within shader code, but very much aren't constant at the API level.
The etymology of the word is important here. The term "uniform" comes from GLSL, which was inspired by the RenderMan standard's shader terminology. In RenderMan, "uniform" was used for values "whose values are constant over whatever portion of the surface being shaded". This was an alternative to "varying", which represented values interpolated across the surface.
"Constant" would imply that the value never changes. Uniform values do change; they simply don't change at the same frequency as other values. Input values change per-invocation, uniform values change per-draw call, and constant values don't change. Note that in GLSL, const usually means "compile-time constant": a value that is set at compile time and is never changed.
A uniform variable in Vulkan ultimately comes from a resource that exists outside of the shader. Blocks of uniform variables fed by buffers and uniforms in push constants fed by push-constant state are both external resources, set by the user. That's a fundamentally different concept from having a compile-time constant struct.
Since it's different from a constant struct, it needs a different term to request it.
What is the approximate maximum state-space size handled by modern model checkers like NuSMV?
I do not need an exact number, just some state-space size for which the run time is still acceptable (say, a few weeks).
What kind of improvements, beyond symbolic model checking, are used to raise that limit?
The answer varies wildly, depending, among other factors, on:
what model checking algorithm is used
how the system is represented
how the model checker (or other tool) is implemented
what hardware the software is running on (and parallelization etc).
Instead of mentioning a specific number of states, I will note a few relevant factors (I use "specification" below as a synonym for "model"):
Symbolic or enumerative: Symbolic algorithms scale differently than enumerative ones. Also, for the same problem, there are typically differences in the computational complexity of known symbolic and enumerative algorithms.
Enumeration is relatively predictable in behavior, in that a state space with N states will most likely take less time to enumerate than a state space with 1000000 * N states (a concrete sketch follows this list).
Symbolic approaches based on binary-decision diagrams (BDDs) can behave in ways (nearly) unrelated to how many states are reachable based on the specification. The main factor is what kind of Boolean functions arise in the encoding of the specification.
For example, a specification that involves a multiplier would result in BDDs that are exponentially large in the number of bits that represent the state, and thus of size roughly linear in the number of states (assuming the reachable states are exponentially more numerous than the bits used to represent them). In this case, a state space with 2^50 states that might otherwise be amenable to symbolic analysis becomes prohibitive.
In other words, it is not only the number of states, but the kind of Boolean function that the system action corresponds to that matters (an action in TLA+ corresponds to what a transition relation represents in other formalisms). In addition, choosing a different encoding (of integers by bits) can have an impact on BDD size.
Symmetry reduction, partial-order reduction, and abstraction are some of the improvements used to approach the analysis of more complex systems.
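To make the enumerative point from above concrete, here is a toy explicit-state reachability loop in Rust (the state type, transition function, and property are invented for illustration; real tools like NuSMV or SPIN are far more sophisticated). The loop visits every reachable state once, which is why enumerative run time tracks the size of the reachable state space:

    use std::collections::{HashSet, VecDeque};

    // Toy system: two bounded counters (a made-up example).
    type State = (u8, u8);

    // Successor function standing in for the transition relation.
    fn successors(&(x, y): &State) -> Vec<State> {
        let mut next = Vec::new();
        if x < 9 { next.push((x + 1, y)); }
        if y < 9 { next.push((x, y + 1)); }
        if x > 0 && y > 0 { next.push((x - 1, y - 1)); }
        next
    }

    // A safety property checked in every reachable state.
    fn is_safe(&(x, y): &State) -> bool {
        x + y <= 18
    }

    fn main() {
        let init: State = (0, 0);
        let mut seen: HashSet<State> = HashSet::from([init]);
        let mut queue: VecDeque<State> = VecDeque::from([init]);

        // Breadth-first enumeration of the reachable state graph: the
        // cost is proportional to reachable states plus transitions.
        while let Some(s) = queue.pop_front() {
            assert!(is_safe(&s), "property violated in {:?}", s);
            for t in successors(&s) {
                if seen.insert(t) {
                    queue.push_back(t);
                }
            }
        }
        println!("explored {} reachable states", seen.len());
    }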
Acceptable runtime is a relative notion. Whatever the model checking approach, there is always a point where the model's fidelity outgrows the available time.
Another approach is to write a specification with unspecified parameters, use a model checker to find errors in instances of the specification that correspond to small parameter values, and, after correcting these, use a theorem prover to ensure the correctness of the specification. This approach is supported by the tools for TLA+, namely the model checker TLC and the theorem prover TLAPS.
Regarding terminology ("specification" above), please see "What good is temporal logic?" by Leslie Lamport.
Also worth noting that, depending on the approach, the number of states and the number of reachable states can be different notions. Usually this matters in typed formalisms: we can specify a system with 1 reachable state but declare variable types that result in many more states, most of which are unreachable from the initial conditions. In symbolic approaches, this affects the encoding and thus the size of the BDDs.
References relevant to state space size:
Bwolen Yang, Randal E. Bryant, David R. O’Hallaron, Armin Biere, Olivier Coudert, Geert Janssen, Rajeev K. Ranjan, Fabio Somenzi, A performance study of BDD-based model checking, FMCAD, 1998, DOI: 10.1007/3-540-49519-3_18
Radek Pelánek, Properties of state spaces and their applications, STTT, 2008, DOI: 10.1007/s10009-008-0070-5 (and a relevant website)
Radek Pelánek, Typical structural properties of state spaces, SPIN, 2004, DOI: 10.1007/978-3-540-24732-6_2
Yunja Choi, From NuSMV to SPIN: Experiences with model checking flight guidance systems, FMSD, 2007, DOI: 10.1007/s10703-006-0027-9
J.R. Burch, E.M. Clarke, K.L. McMillan, D.L. Dill, L.J. Hwang, Symbolic model checking: 10^20 states and beyond, LICS, 1990, DOI: 10.1109/LICS.1990.113767
As I understand it, the ApplicationDataType was introduced in AUTOSAR version 4 to design Software-Components that are independent of the underlying platform and are therefore re-usable in different projects and applications.
But what about making the implementation behind such an SW-C platform independent?
Use-case example: you want to design and implement an SW-C that works as a FIFO. You have one port for input data, an internal buffer, and one port for output data. You could implement this without knowing the data type of the data by using the "abstract" ApplicationDataType.
When you use an ApplicationDataType for a variable as part of a PortInterface, sooner or later you have to map it to an ImplementationDataType for the RTE generator.
Finally, the code created by the RTE-Generator only uses the ImplementationDataType. The ApplicationDataType is nowhere to be found in the generated code.
Is this intended behavior or a bug of the RTE-Generator?
(Or maybe I'm missing something?)
It is intended that ApplicationDataTypes do not directly appear in code; they are represented by their ImplementationDataType counterparts.
The motivation for the definition of data types on different levels of abstraction is explained in the AUTOSAR specifications, namely the TPS Software Component Template.
You will never find an ApplicationDataType in the C code, because it is defined on a physical level with a physical unit and might have a (completely) different representation on the implementation level in C.
Imagine a battery control sensor that measures a voltage. The value can be in the range 0.0V to 14.0V with one digit after the decimal point (physical). You could map it to a float in C, but floating-point operations are expensive. Instead, you use fixed-point arithmetic, where you map the physical value 0.0 to 0, 0.1 to 1, 0.2 to 2, and so on. This mapping is described by a so-called compuMethod.
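A sketch of that mapping (in Rust only for illustration, with made-up names; the code the RTE generates is of course C):

    // Implementation-level representation: raw counts of 0.1 V.
    // The compuMethod is the linear map physical = 0.1 * internal.
    type BatteryVoltageRaw = u8; // 0..=140 covers 0.0 V .. 14.0 V

    fn to_physical(raw: BatteryVoltageRaw) -> f32 {
        f32::from(raw) * 0.1 // needed only at measurement/calibration boundaries
    }

    fn from_physical(volts: f32) -> BatteryVoltageRaw {
        (volts * 10.0).round() as BatteryVoltageRaw
    }

    fn main() {
        let raw = from_physical(10.6);
        assert_eq!(raw, 106); // the component computes on cheap integer values
        assert!((to_physical(raw) - 10.6).abs() < 0.05);
    }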
The software component will always use the internal representation. So why do you need the ApplicationDataType? There are many reasons to use one; some of them are:
Methodology: The software component designer doesn't need to worry about the implementation in C. Somebody else can define that at a later stage.
Measurement: If you measure the value, you have a well-defined compuMethod and know the physical interpretation of the value in C.
Data conversion: If you connect software components with different units, e.g. km/h vs. mph, the RTE can automatically convert the internal representations between them.
Constant conversion: You can specify an initial value as a physical value (e.g. 10.6V) and the RTE will convert it to the internal representation.
Variable Size Arrays: Without dynamic memory allocation, you cannot have a variable-size array in C. But you can reserve some (maximum) memory in an array and store the actual length in a separate field. On the implementation level you then have a struct with two members (value, length), but on the application level you just have an array (see the sketch below).
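A sketch of that last point (again Rust just for illustration, with made-up names; the RTE would generate an equivalent C struct):

    // Implementation-level view of a "variable size array" without heap
    // allocation: reserved maximum storage plus an explicit length field.
    const MAX_LEN: usize = 8;

    struct VariableArray {
        value: [u8; MAX_LEN], // reserved (maximum) storage
        length: usize,        // number of entries currently valid
    }

    impl VariableArray {
        // The application-level view: simply "an array" of the current length.
        fn as_slice(&self) -> &[u8] {
            &self.value[..self.length]
        }
    }

    fn main() {
        let a = VariableArray { value: [1, 2, 3, 0, 0, 0, 0, 0], length: 3 };
        assert_eq!(a.as_slice(), &[1, 2, 3]);
    }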
From AUTOSAR_TPS_SoftwareComponentTemplate.pdf:
ApplicationDataType defines a data type from the application point of view. Especially it should be used whenever something "physical" is at stake.
An ApplicationDataType represents a set of values as seen in the application model, such as measurement units. It does not consider implementation details such as bit-size, endianess, etc.
It should be possible to model the application level aspects of a VFB system by using ApplicationDataTypes only.