Implementing a graph-based SDN network management system as a DFD - modeling

Can I use Data Flow Diagram (DFD) to implement an SDN Network Management System? The developed system handles topology modeling and storage (as a Graph in a graph DB), load balancing, security, acquiring traffic statistics, and routing. All these functions are performed as a cooperation between the graph database and the SDN controller. The results are projected as services in the SDN Application Plane.
If DFD is not applicable, please suggest an alternative.
Thanks in advance.

Data Flow Diagrams can represent any kind of processing, since they show how data moves between systems, processes, and data stores. DFDs also let you decompose systems/processes into sub-systems/sub-processes to reveal a system's internal flows, down to the desired level of detail.
However, DFDs have several limitations with respect to your domain:
They do not provide elaborate data modeling capabilities. They are based on a data dictionary and assume a hierarchical decomposition of the data (to decompose the flows). They could be supported by complementary modeling techniques such as ERDs, which would be sufficient to model SDN network packets.
They are not well suited to the object-oriented technologies that you would probably need to implement your system.
They are not suited to complex real-time data flows that require timing constraints, timeout events, and other event-driven processing, all of which are typical of the networking domain.
A better alternative would be UML. It provides:
activity diagrams, which offer capabilities similar to DFDs, but with precise semantics and with object-oriented and event-processing capabilities;
sequence diagrams, to model exchanges between network components; these can express network protocols very clearly;
state diagrams, to model the states of individual network components;
deployment diagrams, to model the physical network architecture.

Open-Host service, published language and canonical data model

The published language of the open-host service (OHS) can be seen as an integration-oriented model that helps simplify the public interface the OHS exposes to its consumers. The published language is integration-optimized: it exposes data in a model that is more convenient for consumers and specifically designed for integration needs.
When we promote the idea that each bounded context has its own internal (canonical data) model, the OHS in fact decouples the bounded context's internal model from the model used for integration with other bounded contexts, so the internal model can evolve without impacting the consumers of the OHS.
Say we want to design the integration model of the OHS, or even of multiple OHSs. Don't we have here the concept of the old-school canonical data model we used for integration in the ESB/SOA era? In fact, one could say that designing the integration models for the public OHS even contributes to the concept of the interchange context: a separate bounded context mainly in charge of transforming models for more convenient consumption by other components.
So the question is: aren't we going back to the old-school canonical data model from the SOA/ESB era with the concept of the interchange context? If not, what is the difference?
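For illustration, here is a minimal Java sketch of the decoupling in question; all names (Shipment, ShipmentDto, and so on) are hypothetical:

    public class PublishedLanguageSketch {
        // Internal model of the bounded context (free to evolve)
        record Shipment(String id, String route, int internalPriorityCode) {}

        // Published language: the integration-optimized model exposed by the OHS
        record ShipmentDto(String id, String route) {}

        // The OHS translates the internal model into the published language,
        // so internal changes do not leak to consumers
        static ShipmentDto toPublishedLanguage(Shipment s) {
            return new ShipmentDto(s.id(), s.route());
        }

        public static void main(String[] args) {
            Shipment internal = new Shipment("s-1", "A-B", 42);
            System.out.println(toPublishedLanguage(internal)); // ShipmentDto[id=s-1, route=A-B]
        }
    }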

Understanding onion architecture

Onion Architecture Mockups
Above are two images that depict my understanding of Onion Architecture.
They are slightly different from the drawings found online, because they address a question that I cannot find an answer to.
Infrastructure, as far as I can tell, covers things like persistence, logging, etc. I have written examples of them in italics. However, a lot of the time, infrastructure components, as well as the UI, need to communicate with one another: the UI might want to audit or log something, and the persistence project may need to log something. Logging is one of the harder items to fit into the onion architecture, and my understanding is that people have quite different opinions on where you should and shouldn't log.
In my first drawing, I have put an Infrastructure Interfaces layer in the diagram to allow cross communication without any one component knowing the implementation of another component. This is something that I have seen in a few examples.
The second drawing is my preference. It uses a mediator to cross-communicate between infrastructure and UI, and it is basically a way to let the core services communicate with infrastructure indirectly (assume Service Interfaces is called Core Services in the right-hand diagram). The logger would subscribe itself to certain events, as would the database, etc.
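To make that concrete, here is a minimal event-bus sketch in Java standing in for the mediator (all names are hypothetical):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Consumer;

    public class MediatorSketch {
        // A minimal event bus: infrastructure components subscribe, core publishes
        static class EventBus {
            private final List<Consumer<String>> subscribers = new ArrayList<>();
            void subscribe(Consumer<String> handler) { subscribers.add(handler); }
            void publish(String event) { subscribers.forEach(s -> s.accept(event)); }
        }

        public static void main(String[] args) {
            EventBus bus = new EventBus();
            // The logger and the database register themselves for events
            bus.subscribe(event -> System.out.println("[log] " + event));
            bus.subscribe(event -> System.out.println("[db] persisting: " + event));
            // Core services publish without knowing who is listening
            bus.publish("OrderPlaced");
        }
    }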
The first diagram allows only POCOs and interfaces in all layers except the outer layer (excluding the dependency resolver). The second allows domain and business logic in the core service layer and lets the infrastructure layers do their jobs in isolation.
I justified the infrastructure components by ensuring that each had an output of some sort: auditing and logging would usually use a DB of some sort, cache would usually store in memory, and DB should probably have been called persistence. However, there is a library called AutoMapper. I have seen it wrapped in some instances, so that its interface can go in the Core to be used in pretty much any infrastructure, but that seems like over-abstraction to me. AutoMapper is kind of like the Events object in that all infrastructure components use it to translate between themselves and the domain, but I'm not sure it fits in that layer, since it is not a service.
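For reference, the kind of wrapping described above usually looks something like this minimal Java sketch (hand-rolled, hypothetical names; in .NET the outer implementation would delegate to AutoMapper):

    public class MapperSketch {
        // The core owns a narrow mapping abstraction...
        interface ObjectMapper<S, T> {
            T map(S source);
        }

        record Order(String id, double total) {}
        record OrderDto(String id) {}

        public static void main(String[] args) {
            // ...and an outer layer supplies the implementation (hand-rolled here;
            // a wrapper around a real mapping library would slot in the same way)
            ObjectMapper<Order, OrderDto> mapper = order -> new OrderDto(order.id());
            System.out.println(mapper.map(new Order("o-7", 99.0))); // OrderDto[id=o-7]
        }
    }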
Question: Which of the two is closest to the definition of onion architecture and where would you fit in a tool like auto mapper, and do you think trying to wrap something like that is over abstraction?
Thanks!
I've used AutoMapper with the Onion Architecture. We configured AutoMapper in the MVC Global.asax file, which typically calls a Config method on the AutoMapperConfig class in the App_Start directory.
Regarding your graphics, it appears one of them has a separate layer for the Mediator and Observer patterns. They're not necessarily needed; it entirely depends on your approach, just as you can use the Model-View-Controller, Model-View-Presenter, or Model-View-ViewModel pattern with the Onion Architecture. They're just separate patterns coupled in for some added benefit.
Here's where I first came across the Onion Architecture: Jeffrey Palermo, in case you want to see a purer graphical representation.

Modeling Multi-Threaded Applications

Can anyone recommend any methodology/software for modeling multi-threaded applications?
As part of any application design, there is always a need to do modeling using UML. However, a single-threaded design is usually assumed in the initial modeling, and I do not know how to model multi-threaded applications.
Multi-threaded applications are best modeled in UML using either state machines or activity diagrams.
State machines have composite states with "orthogonal" regions, whose states are active in parallel, execute in parallel, and can react to events in parallel.
Activity diagrams have fork and join nodes that create parallel execution flows inside the activity.
Each of these diagrams has pros and cons. If your system is reactive, I would surely go for a state machine. If you are developing more of an information system, activity diagrams are better.
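As a purely illustrative aside, the fork and join nodes of an activity diagram map naturally onto code like this Java sketch (the flow names are hypothetical):

    import java.util.concurrent.CompletableFuture;

    public class ForkJoinSketch {
        public static void main(String[] args) {
            // Fork node: two flows start in parallel
            CompletableFuture<String> statsFlow = CompletableFuture.supplyAsync(() -> "statistics collected");
            CompletableFuture<String> routingFlow = CompletableFuture.supplyAsync(() -> "routes computed");
            // Join node: execution continues only when both flows have finished
            CompletableFuture.allOf(statsFlow, routingFlow).join();
            System.out.println(statsFlow.join() + " / " + routingFlow.join());
        }
    }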

DDD vs N-Tier (3-Tier) Architecture

I have been practicing DDD for a while now, with the four distinct layers: Domain, Presentation, Application, and Infrastructure. Recently, I introduced a friend of mine to the DDD concept, and he thought it introduced an unnecessary layer of complexity (specifically targeting interfaces and IoC). Usually it's at this point that I explain the benefits of DDD, especially its modularity: all the heavy lifting and under-the-hood stuff is in the Infrastructure, and if I wanted to completely change the underlying data-access method, I could do so by touching only the Infrastructure layer repository.
My friend's argument is that he could build a three tiered application in the same way:
Business
Data
Presentation
He would create business models (like domain models) and have the repositories in the Data layer return those business models. Then he would call the business layer, which would call the data layer. I told him the problem with that approach is that it is not testable: sure, you can write integration tests, but you can't write true unit tests. Can you see any other problems with his proposed 3-tiered approach? (I know there are, because why would DDD exist otherwise?)
EDIT: He is not using IoC. In his example, each layer depends directly on the others.
I think you're comparing apples and oranges. Nothing about N-Tier prohibits it from utilizing interfaces & DI in order to be easily unit-tested. Likewise, DDD can be done with static classes and hard dependencies.
Furthermore, if he's implementing business objects and using Repositories, it sounds like he IS doing DDD, and you are quibbling over little more than semantics.
Are you sure the issue isn't simply over using DI/IoC or not?
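To illustrate the point about interfaces and DI, here is a minimal Java sketch of an N-tier business layer that is unit-testable because it depends on an interface rather than on a concrete data layer (all names are hypothetical):

    public class NTierDiSketch {
        record Customer(String id, String name) {}

        // The data layer hides behind an interface owned by the business layer
        interface CustomerRepository {
            Customer findById(String id);
        }

        // The business layer receives the repository via constructor injection
        static class CustomerService {
            private final CustomerRepository repository;
            CustomerService(CustomerRepository repository) { this.repository = repository; }
            String greet(String id) { return "Hello, " + repository.findById(id).name(); }
        }

        public static void main(String[] args) {
            // Any implementation will do: a real DAL in production, a stub in tests
            CustomerRepository stub = id -> new Customer(id, "Ada");
            System.out.println(new CustomerService(stub).greet("c-1")); // prints: Hello, Ada
        }
    }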
I think you are mixing a few methodologies up. DDD is Domain-Driven Design and is about making the business domain a part of your code. What you are describing sounds more like the Onion Architecture (link) versus a 'normal' 3-layered approach. There is nothing wrong with using a 3-layered architecture with DDD. DDD goes hand in hand with TDD (Test-Driven Development). Interfaces help with TDD, as it is easier to test each class in isolation; using Dependency Injection (and IoC) makes it easier still.
The Onion Architecture is about making the Domain (a.k.a. the business rules) independent of everything else - i.e., it is the core of the application, with everything depending on the business objects and rules, while things related to infrastructure, UI, and so on are in the outer layers. The idea is that the closer a module is to the 'shell of the onion', the easier it is to exchange for a new implementation.
Hope this clears it up a bit - now with a minor edit!
Read "Fundamentals of Software Architecture: An Engineering Approach", Chapter 8, Page 100 to 107.
The top-level partitioning is of particular interest to architects because it defines the fundamental architecture style and way of partitioning code. It is one of the first decisions an architect must make. These two styles (DDD & Layered) represent different ways to top-level partition the architecture. So, you are not comparing apples and oranges here.
Architects using technical partitioning organize the components of the system by technical capabilities: presentation, business rules, persistence, and so on.
Domain partitioning, inspired by Eric Evans' book Domain-Driven Design, is a modeling technique for decomposing complex software systems. In DDD, the architect identifies domains or workflows that are independent and decoupled from each other.
A domain-partitioned (DDD) system may still use a persistence library and have a separate layer for business rules, but its top-level partitioning revolves around domains. Each component may have subcomponents, including layers, yet the top level focuses on domains, which better reflects the kinds of changes that most often occur on projects.
So you can implement layers inside each component of a DDD design (your friend is doing the opposite, which is interesting, and we might try that out as well). A rough illustration of the two styles follows below.
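As a sketch (all package names are hypothetical), the two top-level partitioning styles might lay out the same Java code like this:

    // Technical (layered) top-level partitioning: grouped by technical role
    //   com.example.app.presentation.CustomerScreen
    //   com.example.app.business.CustomerRules
    //   com.example.app.persistence.CustomerDao
    //
    // Domain top-level partitioning: grouped by domain, with layers inside each
    //   com.example.app.customer.CustomerScreen
    //   com.example.app.customer.CustomerRules
    //   com.example.app.customer.CustomerDao
    //   com.example.app.shipping...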
However, please note that ("Fundamentals of Software Architecture: An Engineering Approach", Page 135)
The layered architecture is a technically partitioned architecture (as opposed to a domain-partitioned architecture). Groups of components, rather than being grouped by domain (such as customer), are grouped by their technical role in the architecture (such as presentation or business). As a result, any particular business domain is spread throughout all of the layers of the architecture. For example, the domain of "customer" is contained in the presentation layer, business layer, rules layer, services layer, and database layer, making it difficult to apply changes to that domain. As a result, a domain-driven design approach does not work as well with the layered architecture style.
Everything in architecture is a trade-off, which is why the famous answer to every architecture question in the universe is "it depends." That being said, the disadvantage of your friend's approach is that it has higher coupling at the data level. Moreover, it will create difficulties in untangling the data relationships if the architects later want to migrate this architecture to a distributed system (e.g., microservices).
N-tier, or in this case 3-tier, architecture works great with unit tests.
All you have to do is use IoC (inversion of control) with dependency injection and the repository pattern.
The business layer can validate and prepare the returned data for the presentation / web API layer by returning exactly the data that is required.
You can then use mocks in your unit tests all the way through the layers.
All your business logic can live in the BL layer, and the DAL layer will contain repositories injected from a higher level.
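For example, here is a minimal Java sketch of such a unit test, using a hand-rolled mock in place of the DAL (all names are hypothetical; a mocking library such as Mockito would play the same role):

    public class BillingServiceTest {
        interface OrderRepository {
            double totalFor(String customerId);
        }

        static class BillingService {
            private final OrderRepository orders;
            BillingService(OrderRepository orders) { this.orders = orders; }
            // Business rule under test: add 20% tax to the order total
            double invoice(String customerId) { return orders.totalFor(customerId) * 1.2; }
        }

        public static void main(String[] args) {
            // The mock stands in for the DAL: no database is needed
            OrderRepository mock = customerId -> 100.0;
            double result = new BillingService(mock).invoice("any-id");
            System.out.println(Math.abs(result - 120.0) < 1e-9 ? "test passed" : "test FAILED: " + result);
        }
    }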

Model an information system which communicates with other information systems in a UML-diagram?

I have to develop an integration concept to integrate my software as a subsystem into an enterprise information system which communicates with other information systems in other institutions.
I want to show a diagram which explains how the several subsystems are connected and which data is communicated between the subsystems. My problem is that I'm not sure whether the UML language has a diagram type that supports modeling complete information systems.
I thought about the deployment diagram, but I am not sure it is the right one. I don't want to start and then realize that it was the wrong way.
Is there any advice which diagram should be used, or if there is an alternative modeling language for complex information systems?
A component diagram is what you want - see chapter 25 of "The Unified Modeling Language User Guide".
I want to show a diagram which explains how the several subsystems work together and which data is communicated between the subsystems.
I'd probably start with a conceptual sequence diagram.
So, for example, you could have your lifelines represent the various components that you're integrating with, and your interactions could be any of the messages that need to transfer to and from those components.
