Is it possible to implement Scrum while building a Monolith?

I am currently learning Scrum, and I see no recommendation for a particular app architecture.
I have the feeling that Scrum is designed to go with Domain-Driven Design, and that every Sprint should map to a Bounded Context, which is better implemented as a microservice.
So I am wondering if Scrum can be fully implemented while building a monolith?

It depends on your definition of monolith. If you just mean a very large centralized system, this is completely possible (though one could argue against that architectural approach in general).
On the other hand, if you mean that there is a singular system designed in its entirety up-front and then built as one large deliverable, then no: Scrum is specifically an incremental approach to development, and building in large phase-gates runs directly against it.

Related

How to classify services in microservices?

I am new to microservices. I am coming from a monolithic background; in my current environment I have different kinds of services for different purposes, like search, file, email, and notification. I have taken many courses, but in them the instructor separates each entity, gives it its own database, and creates an API for it (like a separate shopping cart entity and product entity). It makes no sense to me; I don't understand the real-world use of microservices, or how to separate a component so it can become its own microservice.
Can anyone give a real project example?
Thanks in advance
Read this and this. Also look here and here. I don't think anyone will give a link to a real working project, so you can try this.
I don't understand the real-world use of microservices
As you have heard in all of those tutorials, a microservices architecture mostly offers these advantages:
Smaller services are easier to maintain and develop.
You can scale specific services rather than the whole project (monolith). For example, you scale service-1 to 4 instances so that request traffic is split across those 4 instances, service-2 to 2 instances, and so on (load balancing). These services may also be distributed across different servers and locations.
If one service fails, it does not take down the whole system, since the services are independent.
Services can be reused for other scenarios or features.
A small team can work on each service, and it is easy to manage both the project and the development flow.
It also suffers from these disadvantages:
Each service is simple and small, but the system as a whole is complex, so the design phase is critical.
Performance is worse, and it takes extra work to improve it (different types of caching at different levels).
Transactions are complex and costly to develop. Imagine that a simple update has to be projected to other services when required; you have to consider a failure and rollback strategy (saga); see the sketch below.
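To make the saga idea concrete, here is a minimal orchestration-style sketch in TypeScript; the step names and empty actions are hypothetical placeholders, not from any real project:

```typescript
// Each saga step pairs an action with a compensating action. On failure,
// the steps that already succeeded are compensated in reverse order,
// instead of relying on a distributed transaction.
interface SagaStep {
  name: string;
  action: () => Promise<void>;
  compensate: () => Promise<void>;
}

async function runSaga(steps: SagaStep[]): Promise<void> {
  const completed: SagaStep[] = [];
  try {
    for (const step of steps) {
      await step.action();
      completed.push(step);
    }
  } catch (err) {
    // Roll back everything that succeeded, newest first.
    for (const step of completed.reverse()) {
      await step.compensate();
    }
    throw err;
  }
}

// Usage: an update projected across two services, with explicit rollback.
void runSaga([
  { name: "reserve-stock", action: async () => {}, compensate: async () => {} },
  { name: "charge-card", action: async () => {}, compensate: async () => {} },
]);
```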
how to separate a component so it can become its own microservice
This is the most challenging part of microservices. You need to study Domain-Driven Design (DDD) in depth:
Decompose by subdomain
Decompose by Business Capabilities
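As a rough illustration of decomposing by subdomain, here is a sketch for a hypothetical eCommerce system (the Catalog/Ordering/Shipping subdomains are assumptions for illustration); in a deployed system each namespace would be a separately deployed service with its own database and API:

```typescript
// Each subdomain owns its model and its public functions; nothing is shared.
namespace Catalog {
  export interface Product { sku: string; name: string; price: number; }
  export function findProduct(sku: string): Product | null { /* own DB */ return null; }
}

namespace Ordering {
  // Ordering has its *own* notion of a line item: it references the catalog
  // by sku only, never by sharing Catalog's Product type.
  export interface LineItem { sku: string; quantity: number; }
  export function placeOrder(items: LineItem[]): string { return "order-1"; }
}

namespace Shipping {
  export function schedule(orderId: string): void { /* own DB, own API */ }
}

// A workflow crosses subdomains only through their public functions (in a
// deployed system: through their APIs or events).
const orderId = Ordering.placeOrder([{ sku: "sku-1", quantity: 2 }]);
Shipping.schedule(orderId);
```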
Can anyone give a real project example?
There are many projects that build microservices with different patterns, but I think you have to start your own and get your hands dirty.

Should I be moving to a microservices based architecture?

I am working on a monolithic system. All of its code is in one repository (Web API and background workers). The system is written in Node.js, and MongoDB (via Mongoose) is used as the data store. My goal is to set a new path for how the project should evolve. My first thought was to move towards a microservices-based architecture.
The monolithic architecture creates some problems:
If my background workers need to scale, I have to deploy the whole project to the server despite only using a small fraction of it.
The whole system must be redeployed when code changes. What if the payment processor calls a webhook while the system is being redeployed?
The advantages of using microservices are quite obvious:
A smaller code base for each individual microservice, which is easier to reason about.
The ability to select the programming tools best suited to a particular use case.
Easier to scale.
Looking at the current code, I noticed that Mongoose ODM (Object Document Mapper) models are used across the whole project to create, query, and update documents in the database. As a principle of good programming, all such interactions with the database should be abstracted; business logic should not leak into other system layers. I could do that by introducing the Repository pattern (from Domain-Driven Design). While the code is still shared between the Web API and its background workers, this is not a hard task; a sketch of that abstraction follows.
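For illustration, a minimal sketch of that Repository abstraction over Mongoose might look like the following; the Order type and the method names are hypothetical, not taken from the project:

```typescript
import { Schema, model } from "mongoose";

interface Order {
  customerId: string;
  total: number;
}

// The domain-facing contract: callers never see Mongoose types.
interface OrderRepository {
  save(order: Order): Promise<void>;
  findByCustomer(customerId: string): Promise<Order[]>;
}

const OrderModel = model<Order>(
  "Order",
  new Schema<Order>({ customerId: String, total: Number })
);

// The only place that knows about Mongoose; swapping data stores later
// means writing a new class that satisfies the same interface.
class MongooseOrderRepository implements OrderRepository {
  async save(order: Order): Promise<void> {
    await OrderModel.create(order);
  }
  async findByCustomer(customerId: string): Promise<Order[]> {
    return (await OrderModel.find({ customerId }).lean().exec()) as Order[];
  }
}
```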
If I decide to extract the repositories into standalone microservices, then a whole bunch of problems arises:
Some sort of query language must be introduced to accommodate complex search queries.
The interface must provide a way to iterate over search results (cursor-based navigation) without returning all database documents over the network (see the sketch below).
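A hedged sketch of that cursor-based iteration, building on the hypothetical repository above: an async iterator lets callers stream results while the repository pulls batches through MongoDB's native cursor, never materializing the full result set.

```typescript
// Order and OrderModel are as defined in the previous sketch.
interface StreamingOrderRepository {
  streamByCustomer(customerId: string): AsyncIterable<Order>;
}

class MongooseStreamingOrderRepository implements StreamingOrderRepository {
  async *streamByCustomer(customerId: string): AsyncIterable<Order> {
    // Mongoose queries expose a native cursor; each iteration pulls the
    // next document from MongoDB instead of loading everything into memory.
    for await (const doc of OrderModel.find({ customerId }).cursor()) {
      yield doc.toObject() as Order;
    }
  }
}
```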
Since the project is in its early stage and I am the only developer, moving to a microservices-based architecture seems like overkill. Maybe there are other approaches I should consider?
For example, extracting the business logic and database interaction into a separate repository and sharing it among services, to avoid complex communication protocols between the services?
Based on my experience working with microservices for the last few years, it seems like overkill in your current scenario, but it pays off in the long term.
Based on the information stated above, my thoughts are:
Code Structure - Applying a Microservices Architecture (MSA) in the above context does not mean separating the DAO, business logic, etc.; rather, it is about designing the system around business functions. For example, if it is an eCommerce application, then you can have shipping, cart, and search as separate services, which can be further divided into smaller services. Read more about domain-driven design here.
Deployment Unit - Keeping each microservice as an independent deployment unit is a key principle. Hence, take a vertical slice of the application and package it as a Docker image with the application code, app server (if any), database, and OS (Linux, etc.).
Communication - With MSA, communication between services becomes key, and hence the general practice is to stay with a message-oriented approach (read about reactive systems and reactive programming for more insight); see the sketch after this list.
PaaS Solution - There are multiple PaaS solutions available which you can adopt so that you don't need to worry about all the other aspects like container management, container orchestration, auto-scaling, configuration management, log management, monitoring, etc. See the following PaaS solutions:
https://www.nanoscale.io/ by TIBCO
https://fabric8.io/ - by RedHat
https://openshift.io - by RedHat
Cloud Vendor Platforms - AWS, Azure, and Google Cloud all have specific support for microservices applications from the deployment perspective, which you can use as an alternative if you don't want to deploy a PaaS solution in your organization.
Hope these pointers help in understanding the overall landscape so that you can structure your architecture for future needs.
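To illustrate the message-oriented approach mentioned under Communication above, here is a minimal in-process sketch; in production this would be a broker such as RabbitMQ, Kafka, or NATS, and all names here are hypothetical, but the shape of the code is the same:

```typescript
type Handler<T> = (message: T) => Promise<void>;

class MessageBus {
  private handlers = new Map<string, Handler<unknown>[]>();

  subscribe<T>(topic: string, handler: Handler<T>): void {
    const list = this.handlers.get(topic) ?? [];
    list.push(handler as Handler<unknown>);
    this.handlers.set(topic, list);
  }

  // The publisher never waits on, or even knows about, the subscribing
  // services: the decoupling the answer recommends.
  async publish<T>(topic: string, message: T): Promise<void> {
    for (const handler of this.handlers.get(topic) ?? []) {
      await handler(message);
    }
  }
}

// Usage: the cart service announces an event; shipping reacts independently.
const bus = new MessageBus();
bus.subscribe<{ orderId: string }>("order.placed", async (m) => {
  console.log(`shipping service: scheduling delivery for ${m.orderId}`);
});
void bus.publish("order.placed", { orderId: "o-1" });
```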
I am working on a monolithic system... My goal is to set a new path for how the project should evolve. My first thought was to move towards a microservices-based architecture.
In what ways do you need to evolve the project? Will it be mostly bugfixes, adding features, improving performance and/or scalability? Do you anticipate other developers collaborating in the future? Are you currently having maintenance issues? The answers to these questions (and many more) should be considered in guiding your choices.
You seem to be doing your homework around the pros and cons of a microservice architecture, so if you haven't asked yourself why you're even doing this in the first place, now would be a good time to do so.
Maybe there are other approaches I should consider?
There's always the good old don't-break-what's-working ;)

Monolithic (vs) Microservices ==> Threads (vs) Processes

I have a monolithic application with a single process having 5 threads. Each thread accomplishes a certain specific task. I am thinking of moving this application to microservices using Docker. If I look at the architecture, each worker thread would become a Docker process. So, in some way, monolithic vs. microservices becomes more of a thread vs. process discussion in my case.
The original thinking behind the monolith was to use threads for performance and to share the same memory. Now, with a microservices architecture, I am pushed towards a process model that may not be suitable from a performance point of view.
I am kind of stuck on how to approach this problem.
What you are missing here is that microservices are not suitable for every software system in the world! Think about the drivers for migrating your current monolithic system to microservices before doing anything. Are you seeking high availability and scalability? Do you want the freedom to write each thread in a different programming language? Is your system so complicated that it cannot be comprehended in a monolithic style? And finally, are you ready to pay the expenses of the microservices style?
Microservices bring many complexities into the system and may cause performance penalties, in exchange for higher scalability, due to the chattiness of the services. If performance is an important concern, the system is not that large, and your answer to most of the above questions is "no", I strongly suggest that you do not go for the microservices style. Instead, try to modularize your current code base and refactor the code for better quality and comprehensibility.
Regarding Docker, you can use it even with the monolithic style in order to remove some deployment barriers and the inconsistency between development and deployment environments. If the mentioned deployment issues do not bother you, do not go for Docker either, since it will just be a layer of computational overhead.
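To make the thread-vs-process trade-off concrete, here is a minimal Node.js worker_threads sketch (assuming a CommonJS build so that __filename resolves; the shared counter is a hypothetical example): threads in one process can share memory directly, whereas one-container-per-worker services would have to move the same state over the network.

```typescript
// shared-counter.ts - threads share the process's memory directly.
import { Worker, isMainThread, workerData } from "node:worker_threads";

if (isMainThread) {
  // Every thread sees the same SharedArrayBuffer: zero-copy communication.
  const shared = new SharedArrayBuffer(4);
  const counter = new Int32Array(shared);
  const worker = new Worker(__filename, { workerData: shared });
  worker.on("exit", () => {
    // The worker's increment is visible here without any serialization.
    console.log("counter =", Atomics.load(counter, 0)); // prints: counter = 1
  });
} else {
  // If this worker were a separate Docker container instead, the same update
  // would have to travel over HTTP or a message queue: exactly the
  // performance cost the question is worried about.
  const counter = new Int32Array(workerData as SharedArrayBuffer);
  Atomics.add(counter, 0, 1);
}
```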
Microservices will give your application more power, but this depends on the size of your project, the degree of availability you need, whether you have a lot of teams, a lot of languages, and so on.
For some projects, microservices will be overkill, and the work can be handled with multithreading, so think about your vision before migrating to this architecture.

High Scalability in Domain-Driven Design

I'm using DDD for a service-oriented application intended to transmit a high volume of messages between a large number of web clients (i.e., browsers).
Because in the context of required functionality, the need for transmission outweighs the need for storage, I love the idea of relying on RAM primarily and minimizing use of the database.
However, I'm unclear on how to architect this from a scalability point of view. A web farm gives high availability of service endpoints and domain-logic processing, but no matter how many servers I have, it seems they must all share a common repository so that their data stays consistent.
How do I build this repository so that it's as scalable as possible? How can it be spread across an array of physical machines in such a way that all machines are consistent and each couldn't care less if another goes down?
Also, since touching the database will be required occasionally (e.g., when a client goes missing and messages intended for it must be stored until it returns), how should I organize my memory-based code and data access layer? Are they both considered "the repository"?
There are several ways to solve this issue. No single answer can really cover it all...
One method to ensure scalability is simply to scale the hardware. Write your web services to be stateless so that you can run a web farm (all servers running identical services, pointing to the same DB) and turn your DB into a cluster. Clustered databases run over multiple servers and work on the same storage. However, this scenario can get complicated and expensive quite quickly.
Some interesting links:
http://scale-out-blog.blogspot.com/2009/09/future-of-database-clustering.html
http://en.wikipedia.org/wiki/Server_farm
Another method is to look at the architecture. CQRS is a common architectural model that ensures scalability. Its name stands for Command/Query Responsibility Segregation, and it builds different databases for reading and writing. This seems contradictory, but if you study it, it becomes natural and you wonder why you've never thought of it before. Simply put, most apps do a lot more reading than writing, and writing tends to be a lot more complicated than reading (requiring business rule validation etc.), so why not separate the two? You can use your expensive transactional database for writing and then a cheap, maybe NoSQL-based or open-source, database over multiple reading servers. Your read model is then optimized for the screens of your application(s), whereas the write model is optimized solely for writing and is in fact a DDD-based set of repositories.
There's just not enough room here to cover this option in detail, but CQRS is a good way of achieving scalability and even ease of development, once you have a CQRS framework in place. There are many other advantages to CQRS, such as ease of auditing (if you combine it with the "event sourcing" technique, which is common in CQRS-based environments).
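As a minimal illustration of the command/query split, here is a sketch whose names are hypothetical, chosen to echo the messaging scenario in the question: commands pass through business rules into the write store, while queries hit a flat, screen-shaped read model.

```typescript
interface SendMessageCommand {
  senderId: string;
  recipientId: string;
  body: string;
}

// Write side: validates business rules, then persists and publishes.
class SendMessageHandler {
  constructor(
    private writeStore: { append(event: object): Promise<void> },
    private readModel: InboxReadModel
  ) {}

  async handle(cmd: SendMessageCommand): Promise<void> {
    if (cmd.body.length === 0) throw new Error("empty message"); // business rule
    const event = { type: "MessageSent", ...cmd, at: Date.now() };
    await this.writeStore.append(event); // expensive transactional store
    this.readModel.apply(event);         // cheap, screen-optimized copy
  }
}

// Read side: a flat structure shaped for the screen; no joins, no rules.
class InboxReadModel {
  private inboxes = new Map<string, { from: string; body: string }[]>();

  apply(e: { recipientId: string; senderId: string; body: string }): void {
    const inbox = this.inboxes.get(e.recipientId) ?? [];
    inbox.push({ from: e.senderId, body: e.body });
    this.inboxes.set(e.recipientId, inbox);
  }

  inboxFor(userId: string): { from: string; body: string }[] {
    return this.inboxes.get(userId) ?? [];
  }
}
```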
Some interesting links:
http://cqrsinfo.com
http://abdullin.com/cqrs
http://blog.fossmo.net/post/Command-and-Query-Responsibility-Segregation-(CQRS).aspx
Are you ready for some reading? There are a lot of options, but I believe you should start by learning about the advantages of modern distributed NoSQL databases, and enjoy learning from the experience gained at Facebook, LinkedIn, and friends. Start here:
http://highscalability.com/
http://nosql-database.org/

DDD vs N-Tier (3-Tier) Architecture

I have been practicing DDD for a while now with the 4 distinct layers: Domain, Presentation, Application, and Infrastructure. Recently, I introduced a friend of mine to the DDD concept, and he thought it introduced an unnecessary layer of complexity (specifically targeting interfaces and IoC). Usually, it's at this point that I explain the benefits of DDD, especially its modularity: all the heavy lifting and under-the-hood stuff is in the Infrastructure layer, and if I wanted to completely change the underlying data-access method, I could do so by touching only the Infrastructure layer's repository.
My friend's argument is that he could build a three tiered application in the same way:
Business
Data
Presentation
He would create business models (like domain models) and have the repositories in the Data layer return those business models. Then he would call the business layer, which would call the data layer. I told him the problem with that approach is that it is not testable: sure, you can write integration tests, but you can't write true unit tests. Can you see any other problems with his proposed 3-tiered approach? (I know there are, because why would DDD exist otherwise?)
EDIT: He is not using IoC. Each layer in his example depends directly on the others.
I think you're comparing apples and oranges. Nothing about N-Tier prohibits it from utilizing interfaces & DI in order to be easily unit-tested. Likewise, DDD can be done with static classes and hard dependencies.
Furthermore, if he's implementing business objects and using Repositories, it sounds like he IS doing DDD, and you are quibbling over little more than semantics.
Are you sure the issue isn't simply over using DI/IoC or not?
I think you are mixing a few methodologies up. DDD is Domain-Driven Design and is about making the business domain a part of your code. What you are describing sounds more like the Onion Architecture (link) versus a 'normal' 3-layered approach. There is nothing wrong with using a 3-layered architecture with DDD. DDD pairs naturally with TDD (Test-Driven Development), and interfaces help with TDD, as it is easier to test each class in isolation. If you use Dependency Injection (and IoC), it becomes easier still.
The Onion Architecture is about making the Domain (a.k.a. the business rules) independent of everything else, i.e. it's the core of the application, with everything depending on the business objects and rules, while things related to infrastructure, UI, and so on live in the outer layers. The idea is that the closer a module is to the 'shell of the onion', the easier it is to exchange for a new implementation.
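A minimal sketch of that dependency direction, with hypothetical names: the domain core defines both the business rule and the contract, and the outer layer implements the contract, so swapping infrastructure never touches the core.

```typescript
// --- Core (innermost): pure business objects and rules, no dependencies.
interface Customer {
  id: string;
  email: string;
}

interface CustomerRepository {
  findById(id: string): Promise<Customer | null>;
}

class WelcomeService {
  constructor(private customers: CustomerRepository) {}

  async welcomeMessage(id: string): Promise<string> {
    const c = await this.customers.findById(id);
    if (!c) throw new Error("unknown customer"); // business rule lives here
    return `Welcome back, ${c.email}`;
  }
}

// --- Outer layer: infrastructure depends on the core, never the reverse.
// Swapping SQL for an HTTP service means only replacing this class.
class InMemoryCustomerRepository implements CustomerRepository {
  private data = new Map<string, Customer>([["1", { id: "1", email: "a@b.c" }]]);

  async findById(id: string): Promise<Customer | null> {
    return this.data.get(id) ?? null;
  }
}
```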
Hope this clears it up a bit - now with a minor edit!
Read "Fundamentals of Software Architecture: An Engineering Approach", Chapter 8, Page 100 to 107.
The top-level partitioning is of particular interest to architects because it defines the fundamental architecture style and way of partitioning code. It is one of the first decisions an architect must make. These two styles (DDD & Layered) represent different ways to top-level partition the architecture. So, you are not comparing apples and oranges here.
Architects using technical partitioning organize the components of the system by technical capabilities: presentation, business rules, persistence, and so on.
Domain partitioning, inspired by Eric Evans's book Domain-Driven Design, is a modeling technique for decomposing complex software systems. In DDD, the architect identifies domains or workflows that are independent of and decoupled from each other.
A domain partitioning (DDD) may use a persistence library and have a separate layer for business rules, but its top-level partitioning revolves around domains. Each component in the domain partitioning may have subcomponents, including layers, but the top-level partitioning focuses on domains, which better reflects the kinds of changes that most often occur on projects.
So you can implement layers inside each component of a DDD design (your friend is doing the opposite, which is interesting, and we might try that out as well).
However, please note that ("Fundamentals of Software Architecture: An Engineering Approach", Page 135)
The layered architecture is a technically partitioned architecture (as opposed to a domain-partitioned architecture). Groups of components, rather than being grouped by domain (such as customer), are grouped by their technical role in the architecture (such as presentation or business). As a result, any particular business domain is spread throughout all of the layers of the architecture. For example, the domain of “customer” is contained in the presentation layer, business layer, rules layer, services layer, and database layer, making it difficult to apply changes to that domain. As a result, a domain-driven design approach does not work as well with the layered architecture style.
Everything in architecture is a trade-off, which is why the famous answer to every architecture question in the universe is "it depends." That being said, the disadvantage of your friend's approach is that it has higher coupling at the data level. Moreover, it will create difficulties in untangling the data relationships if the architects later want to migrate this architecture to a distributed system (e.g., microservices).
N-tier, or in this case 3-tier, architecture works great with unit tests.
All you have to do is apply IoC (Inversion of Control) using dependency injection and the Repository pattern.
The business layer can validate and prepare the returned data for the presentation / Web API layer by returning exactly the data that is required.
You can then use mocks in your unit tests all the way through the layers.
All your business logic can live in the BL layer, and the DAL layer will contain repositories injected from the higher level, as in the sketch below.
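Here is a hedged sketch of that claim, with hypothetical names throughout: the BL depends only on a repository interface, so a unit test can substitute an in-memory mock and never touch a database.

```typescript
interface UserRepository {
  findEmail(userId: string): Promise<string | null>;
}

// Business layer: validates and shapes data for the presentation layer.
class UserService {
  constructor(private repo: UserRepository) {} // injected DAL dependency

  async displayName(userId: string): Promise<string> {
    const email = await this.repo.findEmail(userId);
    if (!email) throw new Error("user not found");
    return email.split("@")[0]; // return exactly what the UI needs
  }
}

// A true unit test: no database, just an in-memory stand-in for the DAL.
async function testDisplayName(): Promise<void> {
  const mockRepo: UserRepository = {
    findEmail: async (id) => (id === "42" ? "ada@example.com" : null),
  };
  const service = new UserService(mockRepo);
  console.assert((await service.displayName("42")) === "ada");
}

void testDisplayName();
```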
