Why does Direct3D 11 make a distinction between SRVs and UAVs?

I have been playing with Direct3D 11 and was surprised to discover that an HLSL StructuredBuffer<T> must be bound to a Shader Resource View (SRV) whereas a RWStructuredBuffer<T> must be bound to an Unordered Access View (UAV). Looking deeper into this, it seems that all read-write shader resources require UAVs whereas read-only resources require SRVs.
Comparing the D3D11_UNORDERED_ACCESS_VIEW_DESC and D3D11_SHADER_RESOURCE_VIEW_DESC structures, a UAV is described by more or less a subset of the information that describes an SRV. The APIs that bind UAVs and SRVs to pipeline stages are also very similar. Even the documentation of the two interfaces reads like the same concept explained by two different technical writers:
SRV: A shader-resource-view interface specifies the subresources a shader can access during rendering.
UAV: A view interface specifies the parts of a resource the pipeline can access during rendering.
I'm not very well-versed in D3D11, but it seems to me that the concept of UAVs complicates the API without much benefit. Does the distinction between SRVs and UAVs exist to better map to the hardware or because of technical restrictions? Or was it just an API design decision?

The distinction was probably introduced primarily for performance reasons. The performance characteristics of data that is only ever read are quite different from those of data that can be accessed for arbitrary writes combined with reads.
It's likely that on most hardware the memory backing the two kinds of resource should be allocated from different types of memory for best performance, with different parameters determining things like how it is cached, swizzled/tiled, and aligned. By separating the concepts at the API level, the driver can be given more information about the intended usage of a resource, both when it is created and again when it is bound.
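As a concrete illustration, here is a minimal C++ sketch (assuming a valid ID3D11Device* named device; all names are standard D3D11 except the helper function itself, which is hypothetical) of creating one structured buffer and exposing it through both view types. Note how the bind flags requested at creation time carry exactly the usage hints described above:

#include <d3d11.h>

// Hypothetical helper: create a structured buffer and both views onto it.
HRESULT CreateStructuredBufferViews(ID3D11Device* device,
                                    UINT elemSize, UINT elemCount,
                                    ID3D11Buffer** outBuf,
                                    ID3D11ShaderResourceView** outSrv,
                                    ID3D11UnorderedAccessView** outUav)
{
    D3D11_BUFFER_DESC bd = {};
    bd.ByteWidth = elemSize * elemCount;
    bd.Usage = D3D11_USAGE_DEFAULT;
    // The bind flags tell the driver, up front, how the resource may be used.
    bd.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_UNORDERED_ACCESS;
    bd.MiscFlags = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
    bd.StructureByteStride = elemSize;
    HRESULT hr = device->CreateBuffer(&bd, nullptr, outBuf);
    if (FAILED(hr)) return hr;

    // Read-only view: bound wherever HLSL declares StructuredBuffer<T>.
    D3D11_SHADER_RESOURCE_VIEW_DESC sd = {};
    sd.Format = DXGI_FORMAT_UNKNOWN; // structured buffers use UNKNOWN
    sd.ViewDimension = D3D11_SRV_DIMENSION_BUFFER;
    sd.Buffer.FirstElement = 0;
    sd.Buffer.NumElements = elemCount;
    hr = device->CreateShaderResourceView(*outBuf, &sd, outSrv);
    if (FAILED(hr)) return hr;

    // Read-write view: bound wherever HLSL declares RWStructuredBuffer<T>.
    D3D11_UNORDERED_ACCESS_VIEW_DESC ud = {};
    ud.Format = DXGI_FORMAT_UNKNOWN;
    ud.ViewDimension = D3D11_UAV_DIMENSION_BUFFER;
    ud.Buffer.FirstElement = 0;
    ud.Buffer.NumElements = elemCount;
    return device->CreateUnorderedAccessView(*outBuf, &ud, outUav);
}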

Related

Determining active and passive roles between objects, during early design stage

When modeling the interaction between two objects, I sometimes find myself considering which object should be active in the interaction and which should be passive, or whether the interaction should be managed by a third-party controller.
As an example, let's assume we have an Item and a Container.
Three possible ways to handle the interaction between them could be:
item.Drop(container)
container.Store(item)
StorageController.Store(item, container)
Assuming all 3 approaches (and several others) can be correct in a given situation, what kind of considerations do you make to reach the most sustainable design choice?
The question is intentionally generic, as I'm interested in the general principles people apply when assigning roles to objects, particularly when the overall architecture is in its early stages and it is not yet clear what level of responsibility each object will take on over time.
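For concreteness, the three shapes might be sketched in C++ as follows (the class names come from the question; the bodies are purely illustrative assumptions):

#include <vector>

class Container;

class Item {
public:
    // Option 1: the item is the active party and must know about containers.
    void Drop(Container& container);
};

class Container {
public:
    // Option 2: the container is the active party and must know about items.
    void Store(Item& item) { items_.push_back(&item); }
private:
    std::vector<Item*> items_;
};

inline void Item::Drop(Container& container) { container.Store(*this); }

// Option 3: a third party owns the interaction, so Item and Container
// need to know as little as possible about each other.
class StorageController {
public:
    static void Store(Item& item, Container& container) { container.Store(item); }
};

Note what each option does to the dependency graph: option 1 makes Item depend on Container, option 2 makes Container depend on Item, and option 3 concentrates the coupling in a third class that depends on both.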

What is the difference between a data transfer object (DTO) and a representation object in domain-driven design (DDD)?

I know a DTO is returned by the server side and received by the client side, but I am confused by the representation object in DDD. I think they are almost the same. Can someone tell me their differences?
They solve different problems in different contexts.
Data transfer is a boundary concern - how do we move information from here to there (across a remote interface)? Among the issues you may run into: the transfer of information is slow or expensive. One way of keeping this under control is to move information at a coarser grain.
the main reason for using a Data Transfer Object is to batch up what would be multiple remote calls into a single call -- Martin Fowler, Patterns of Enterprise Application Architecture
In other words, a DTO is your program's representation of a fat message.
In DDD, the value object pattern is a modeling concern; it is used to couple immutable representations of information and related computations.
A DTO tends to look like a data structure, with methods that can be used to transform that data structure into a representation (for example: an array of bytes) that can be sent across a boundary.
A value object tends to look like a data structure, with methods that can be used to compute other information that is likely to be interesting in your domain.
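A hypothetical C++ sketch of the contrast (MoneyDto and Money are illustrative names, not from any particular framework):

#include <cstdint>
#include <stdexcept>
#include <string>

// DTO: a dumb data structure plus a way to turn it into a wire format.
struct MoneyDto {
    std::int64_t cents;
    std::string currency;

    // Boundary concern: produce a representation that can cross a boundary.
    std::string ToJson() const {
        return "{\"cents\":" + std::to_string(cents) +
               ",\"currency\":\"" + currency + "\"}";
    }
};

// Value object: immutable, with computations that matter to the domain.
class Money {
public:
    Money(std::int64_t cents, std::string currency)
        : cents_(cents), currency_(std::move(currency)) {}

    // Domain concern: derive other interesting values, enforce invariants.
    Money Add(const Money& other) const {
        if (currency_ != other.currency_)
            throw std::invalid_argument("currency mismatch");
        return Money(cents_ + other.cents_, currency_);
    }
    bool operator==(const Money& other) const {
        return cents_ == other.cents_ && currency_ == other.currency_;
    }

private:
    std::int64_t cents_;
    std::string currency_;
};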
DTOs tend to be more stable (or at least backwards compatible) out of necessity -- because producer and consumer are remote from one another, coordinating a change to both requires more effort than a single local change.
Value objects, in contrast, are easier to change because they are a domain model concern. If you want to change the model, that's just one thing, and correspondingly easier to coordinate.
(There's kind of a hedge - for systems that need persistence, we need some way to get the information out of the object into a representation that can be stored and retrieved. That's not necessarily a value object concern, especially if you are willing to use general-purpose data structures to move information in and out of "the model".)
In the kingdom of nouns, the lines can get blurry - partly because any information that isn't a general-purpose data structure/primitive is "an object", and partly because you can often get away with using the same objects for your internal concerns and boundary concerns.

Sequence Diagram: Interactions with resources (DB, Network, Caches, etc)

I am currently assessing the behavior of different software modules with regard to DB access, network access, number of memory allocations, etc.
The main goal is to pick a main use case (let's say system initialization) and identify the modules that are:
Unnecessarily accessing the DB.
Creating too many caches for the same data.
Making too many allocations (or too big ones) at once.
Spawning many threads.
Accessing the network.
By assessing those, I could have an overview of the modules that need to be reworked in order to improve performance, delete redundant DB accesses, avoid CPU usage peaks, etc.
I found the sequence diagram a good candidate for representing the use cases' behavior, but I am not sure how to depict their interaction with the above-mentioned activities.
I could do something like what is shown in this picture, but that is an "invention" of mine: tagging functions with colors. I am not sure if it is too simplistic or childish (too many colors?).
I wonder if there is any specific UML diagram to represent this kind of interaction.
Using sequence diagrams (SDs) is probably the most appropriate approach here. You might consider timing diagrams in certain cases, if you need to present timing constraints. However, SDs already have a way to show timing constraints, and it is quite powerful.
You should adorn your diagram with a comment stating that the length of the colored self-calls represents percentage of use, or something like that (or just add a title saying so). Using colors is perfectly fine, by the way.
As a side note: (the colored) self-calls are normally shown with a self-pointing arrow, but I'd guess your picture can be understood by anyone, so you may see that as nitpicking. Most likely they are not real self-calls but just indicators, so that's fine too.
tl;dr Whatever transports the message is appropriate.

Can applications coexist within the same DHT?

If you create a new application which uses a distributed hash table (DHT), you need to bootstrap the p2p network. I had the idea that you could join an existing DHT (e.g. the Bittorrent DHT).
Is this feasible? Of course, we assume the same technology; combining Chord with Kademlia is obviously not feasible.
If yes, would this be considered parasitic or symbiotic? Parasitic, meaning that it conflicts with the original use somehow; symbiotic, if it is good for both applications because they support each other.
In general: Kademlia and Chord are just abstract designs, while implementations provide varying functionality.
If an existing network's feature set is too narrow, you won't be able to map your application logic onto it. If it's overly broad for your needs, it might be a pain to re-implement if no open-source library is available.
For BitTorrent: the BitTorrent DHT provides 20-byte key -> List[IP, Port] lookups as its primary feature, where the IP is determined by the sender's IP and thus cannot be used to store arbitrary data. There are some secondary features, like bloom-filter statistics over those lists, but they're probably even less useful for other applications.
It does not provide general key-value storage, at least not as part of the core specification. There is an extension proposal for that.
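To make that concrete, here is a rough C++ sketch of the interface the BitTorrent DHT effectively exposes (the type and method names are mine, not from the spec):

#include <array>
#include <cstdint>
#include <vector>

using NodeKey = std::array<std::uint8_t, 20>; // e.g. a SHA-1 info-hash

struct Endpoint {
    std::array<std::uint8_t, 4> ip; // filled in from the observed sender
                                    // address, so it cannot carry payload
    std::uint16_t port;
};

// The two primitives: announce your own endpoint under a 20-byte key,
// and look up the endpoints that other nodes announced under it.
class BittorrentDhtView {
public:
    virtual void Announce(const NodeKey& key, std::uint16_t port) = 0;
    virtual std::vector<Endpoint> Lookup(const NodeKey& key) = 0;
    virtual ~BittorrentDhtView() = default;
};

Anything your application needs beyond "which endpoints announced this key?" has to be layered on top of, or bolted onto, that primitive.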
Implementations do provide some basic forward compatibility for unknown message types, treating them like node-lookup requests instead of just ignoring them. But that is only of limited usefulness if your application supplies a small fraction of the nodes, since you're unlikely to encounter other nodes implementing your functionality during a lookup.
If yes, would this be considered parasitic or symbiotic?
That largely depends on whether you are a "good citizen" in the network.
Does your implementation follow the spec, including commonly used extensions?
Does the traffic your use case causes stay within an order of magnitude of what other nodes cause?
Are your application's node lifetimes long enough to stay within the expected churn rates of the target DHT?

Modelling a permissions system

How would you model a system that handles permissions for carrying out certain actions inside an application?
Security models are a large (and open) field of research. There's a huge array of models available to choose from, ranging from the simple:
Lampson's access control matrix lists every domain object and every principal in the system, together with the actions that principal is allowed to perform on that object. It is very verbose and, if actually implemented in this fashion, very memory-intensive.
Access control lists are a simplification of Lampson's matrix: consider them something akin to a sparse-matrix implementation that lists objects, principals, and allowed actions, and doesn't encode all the "null" entries from Lampson's matrix. Access control lists can include 'groups' as a convenience, and the lists can be stored per object or per principal (sometimes per program, as in AppArmor, TOMOYO, or LIDS).
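A minimal sketch of that sparse-matrix view (all names are hypothetical): each object keeps only its non-empty entries.

#include <map>
#include <set>
#include <string>

enum class Action { Read, Write, Execute };

// One ACL per object: only principals that hold any rights appear, which
// is the "sparse matrix" simplification of Lampson's full matrix.
class AccessControlList {
public:
    void Grant(const std::string& principal, Action action) {
        entries_[principal].insert(action);
    }
    bool IsAllowed(const std::string& principal, Action action) const {
        // Absent entries are the implicit "null" cells of the matrix.
        auto it = entries_.find(principal);
        return it != entries_.end() && it->second.count(action) > 0;
    }
private:
    std::map<std::string, std::set<Action>> entries_;
};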
Capability systems are based on the idea of having a reference or pointer to objects; a process has access to an initial set of capabilities, and can get more capabilities only by receiving them from other objects on the system. This sounds pretty far-out, but think of Unix file descriptors: they are an unforgeable reference to a specific open file, and the file descriptor can be handed to other processes or not. If you give the descriptor to another process, it will have access to that file. Entire operating systems were written around this idea. (The most famous are probably KeyKOS and EROS, but I'm sure this is a debatable point. :)
... to the more complex, which have security labels assigned to objects and principals:
Security rings, such as those implemented in Multics and x86 CPUs, among others, provide security traps or gates that allow processes to transition between rings; each ring has a different set of privileges and objects.
Denning's Lattice is a model of which principals are allowed to interact with which security labels in a very hierarchical fashion.
Bell-LaPadula is similar to Denning's Lattice and provides rules to prevent leaking top-secret data to unclassified levels; common extensions provide further compartmentalization and categorization to better support military-style 'need to know' access.
The Biba Model is similar to Bell-LaPadula, but 'turned on its head' -- Bell-LaPadula is focused on confidentiality but does nothing for integrity, whereas Biba is focused on integrity but does nothing for confidentiality. (Bell-LaPadula prevents someone from reading The List Of All Spies, but would happily allow anyone to write anything into it. Biba would happily allow anyone to read The List Of All Spies, but forbids nearly everyone from writing to it.)
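The duality is easy to see when the two core rules are written out (a sketch using plain integer levels; real systems add compartments and categories on top):

// Bell-LaPadula (confidentiality): "no read up, no write down".
bool BlpCanRead(int subjectLevel, int objectLevel)  { return subjectLevel >= objectLevel; }
bool BlpCanWrite(int subjectLevel, int objectLevel) { return subjectLevel <= objectLevel; }

// Biba (integrity) is the same lattice inverted: "no read down, no write up".
bool BibaCanRead(int subjectLevel, int objectLevel)  { return subjectLevel <= objectLevel; }
bool BibaCanWrite(int subjectLevel, int objectLevel) { return subjectLevel >= objectLevel; }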
Type Enforcement (and its sibling, Domain Type Enforcement) provides labels on principals and objects, and specifies the allowed object-verb-subject (class) tables. This is the model behind the familiar SELinux and SMACK.
... and then there are some that incorporate the passage of time:
Chinese Wall was developed in business settings to separate employees within an organization that provides services to competitors in a given market: e.g., once Johnson has started working on the Exxon-Mobil account, he is not allowed access to the BP account. If Johnson had started working on BP first, he would be denied access to Exxon-Mobil's data.
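A sketch of that time-dependent rule (hypothetical names; the conflict-class map assigns each account to its market):

#include <map>
#include <set>
#include <string>

// Chinese Wall: what a user may access depends on what they have
// already accessed within the same conflict class (market).
class ChineseWall {
public:
    explicit ChineseWall(std::map<std::string, std::string> conflictClass)
        : conflictClass_(std::move(conflictClass)) {}

    bool Access(const std::string& user, const std::string& company) {
        const std::string& market = conflictClass_.at(company);
        for (const auto& seen : history_[user])
            if (seen != company && conflictClass_.at(seen) == market)
                return false; // already serving a competitor in this market
        history_[user].insert(company); // the wall goes up on first access
        return true;
    }

private:
    std::map<std::string, std::string> conflictClass_; // company -> market
    std::map<std::string, std::set<std::string>> history_; // user -> accessed
};

With conflictClass = {{"Exxon-Mobil", "oil"}, {"BP", "oil"}}, Johnson's first Access("Johnson", "Exxon-Mobil") succeeds, and any later Access("Johnson", "BP") is denied.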
LOMAC and high-watermark are two dynamic approaches: LOMAC modifies the privileges of processes as they access progressively higher levels of data, and forbids writing to lower levels (processes migrate towards "top security"), while high-watermark modifies the labels on data as higher levels of processes access it (data migrates towards "top security").
Clark-Wilson models are very open-ended; they include invariants and rules to ensure that every state transition does not violate the invariants. (This can be as simple as double-entry accounting or as complex as HIPAA.) Think database transactions and constraints.
Matt Bishop's "Computer Security: Art and Science" is definitely worth reading if you'd like more depth on the published models.
I prefer RBAC (role-based access control). Although you may find it very similar to ACLs, the two differ semantically.
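The semantic difference is the indirection: an ACL binds principals directly to permissions on objects, while RBAC binds principals to roles and roles to permissions. A hypothetical sketch:

#include <map>
#include <set>
#include <string>

// RBAC: permissions attach to roles, users attach to roles. Changing what
// an "editor" may do updates every editor at once; with per-object ACLs the
// equivalent change would touch many individual entries.
class RbacPolicy {
public:
    void GrantToRole(const std::string& role, const std::string& permission) {
        rolePerms_[role].insert(permission);
    }
    void AssignRole(const std::string& user, const std::string& role) {
        userRoles_[user].insert(role);
    }
    bool IsAllowed(const std::string& user, const std::string& permission) const {
        auto roles = userRoles_.find(user);
        if (roles == userRoles_.end()) return false;
        for (const auto& role : roles->second) {
            auto perms = rolePerms_.find(role);
            if (perms != rolePerms_.end() && perms->second.count(permission) > 0)
                return true;
        }
        return false;
    }
private:
    std::map<std::string, std::set<std::string>> rolePerms_;
    std::map<std::string, std::set<std::string>> userRoles_;
};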
Go through the following links:
http://developer.android.com/guide/topics/security/security.html
http://technet.microsoft.com/en-us/library/cc182298.aspx
