I am confused about how I should deploy my microservices in Azure. I deployed them by creating a couple of App Services, one for each microservice, using an ARM template. It is becoming very costly to deploy every microservice in a different App Service, and it is very difficult to manage all these services. Another approach I was considering is to deploy everything under one App Service, but then it would again be a monolithic kind of Web API.
Recently, I read in a blog that you should use Azure Service Fabric to deploy microservices.
I want to understand which of the options below I should go with:
One App Service.
Multiple microservices, each in a different App Service.
Containerization with Kubernetes (or another orchestrator).
Azure Service Fabric.
Any other option you would suggest.
I am really confused about these. Please help me.
Thanks in Advance!!!
I'd highly recommend starting with the Azure Architecture Guide which will give you a solid big-picture overview. From there, you could take a look at the microservice-specific guidance.
To provide a very short, incomplete answer to your question, App Services are a unit of scale. If you're building a small service that focuses on one domain, and all of your functionality can scale together, you may be better off with one application hosted on one App Service. Know your domain first; don't split things up just to have microservices.
To choose which Azure compute service to use, this decision tree is very helpful.
Microservices are not only a solution to technological problems; they are also a solution to an organizational scalability problem. On the other hand, microservices are really hard to manage, which is why they usually cannot be implemented without DevOps practices to help solve this problem.
I am saying all this because you wrote that they are becoming hard to manage, and it might be that the problem is not technical; instead, you may not have the right org structure and processes to handle microservices.
Microservices work well if the team that builds a service also runs it, and that includes deployment, support, etc. You should not have one person or team "handling" and deploying other teams' microservices, because, as you discovered, microservices are really hard to manage.
From a pure technological point of view, you need to clarify a few things:
How many microservices do you have?
How many teams/developers?
What technologies are your microservices built in?
How chatty are these microservices with each other?
Based on those four questions, you will probably end up on App Services if you have a small number of microservices, they do not need more than 10 instances each, they are built in one of the supported technologies, and they are not super chatty with each other.
I would use AKS if you have a lot of microservices and many teams, so that it is worth having a small platform team becoming expert in Kubernetes (not in charge of the deployments!).
I would recommend going through these links:
Martin Fowler: https://martinfowler.com/microservices/
DevOps at Microsoft: https://www.youtube.com/watch?v=OwiT59e0kB4&t=349s
Why not to do Microservices: https://segment.com/blog/goodbye-microservices/
One of the many microservices talks on YouTube, such as this one: https://www.youtube.com/watch?v=MrV0DqTqpFU
I am looking to use GCP for a micro-services application. After comparing AWS and GCP I have decided to go with Google because one major requirement for the project is to schedule tasks to run in the future (Cloud Tasks) which AWS does not seem to offer an equivalent of.
I am planning on containerizing my services and deploying to GCP using Cloud Run with a Redis cluster running as well for caching.
I understand that you cannot have multiple Firestore instances running in one project. Does this mean that all of my services will be using the same database?
I was looking to follow a model (possible on AWS) where each service had its own database instance that it reached out to.
Is this pattern possible on GCP?
Firestore indeed is for the moment limited to a single database instance per project. For performance that is usually not a problem, but for isolation such as your use-case, that can indeed be a reason to look elsewhere.
Firebase's original Realtime Database does allow multiple instances per project, and recently added a REST API for provisioning database instances. Since each Google Cloud Project can be toggled to also be a Firebase project, you could consider that.
Does this mean that all of my services will be using the same database?
I don't know all the details of your case. Do you think that you can deploy one "microservice" per project? It is not ideal, especially if they are to communicate using Pub/Sub, but it may be an option. In that case every "microservice" may get its own Firestore if that is a requirement.
I don't think one should consider GCP projects as some kind of "hard boundary". For me they are just another level of granularity, in addition to folders, etc.
There might be some benefits to the "one microservice - one project" approach as well: for example, less dependent lifecycles, better (more fine-grained) security, and maybe simpler development workflows.
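To make the "one microservice - one project" idea concrete, here is a minimal sketch, assuming the @google-cloud/firestore and @google-cloud/pubsub client libraries; the project ID, collection and topic names are made up for illustration:

```typescript
import { Firestore } from "@google-cloud/firestore";
import { PubSub } from "@google-cloud/pubsub";

// Hypothetical "orders" service that lives in its own GCP project.
const db = new Firestore({ projectId: "shop-orders-prod" });
const pubsub = new PubSub({ projectId: "shop-orders-prod" });

export async function createOrder(orderId: string, data: Record<string, unknown>) {
  // The service only ever touches the Firestore instance of its own project.
  await db.collection("orders").doc(orderId).set(data);

  // Other services (in other projects) are notified via Pub/Sub instead of
  // reading a shared database.
  await pubsub.topic("order-created").publishMessage({ json: { orderId } });
}
```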
I am working on a monolith system. All of its code is in one repository (Web API and background workers). The system is written in Node.js, and MongoDB (Mongoose) is used as the data store. My goal is to set a new path for how the project should evolve. At first I was wondering if I could move towards a microservices-based architecture.
Monolith architecture creates some problems:
If my background workers need to scale, I have to deploy the whole project to the server despite only using a small fraction of it.
The whole system must be redeployed when code changes. What if a payment processor calls a webhook while the system is being redeployed?
The advantages of using microservices are quite obvious:
Smaller code base for each individual microservice, which is easier to reason about.
Ability to select programming tools best for particular use case.
Easier to scale.
Looking at the current code, I noticed that Mongoose ODM (Object Document Mapper) models are used across the whole project to create, query and update documents in the database. As a principle of good programming, all such interactions with the database should be abstracted; business logic should not leak into other system layers. I could do that by introducing the REPOSITORY pattern (Domain-Driven Design). While the code is still shared between the Web API and its background workers, that is not a hard task to do.
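For example, a minimal repository sketch in TypeScript could look like the following; the Order model and its fields are hypothetical, only the pattern matters:

```typescript
import { Schema, model } from "mongoose";

// Hypothetical domain type and Mongoose model; names are illustrative only.
export interface Order {
  customerId: string;
  total: number;
  status: string;
}

export const OrderModel = model<Order>(
  "Order",
  new Schema<Order>({
    customerId: { type: String, required: true },
    total: { type: Number, required: true },
    status: { type: String, default: "pending" },
  })
);

// The repository is the only layer that knows about Mongoose; the Web API and
// background workers program against this interface instead of the ODM.
export interface OrderRepository {
  findById(id: string): Promise<Order | null>;
  findByCustomer(customerId: string): Promise<Order[]>;
  create(order: Order): Promise<Order>;
}

export class MongooseOrderRepository implements OrderRepository {
  async findById(id: string): Promise<Order | null> {
    return OrderModel.findById(id).exec();
  }

  async findByCustomer(customerId: string): Promise<Order[]> {
    return OrderModel.find({ customerId }).exec();
  }

  async create(order: Order): Promise<Order> {
    return OrderModel.create(order);
  }
}
```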
If I decide to extract the repositories into standalone microservices, then a whole bunch of problems arise:
Some sort of query language must be introduced to accommodate complex search queries.
The interface must provide a way to iterate over search results (cursor-based navigation) without returning all database documents over the network.
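As a rough sketch of that second point, one common approach (assumed here, not mandated by anything) is to expose an opaque cursor based on the last returned _id, reusing the hypothetical OrderModel from the repository sketch above:

```typescript
import { Types } from "mongoose";
// Hypothetical module path for the repository sketch shown earlier.
import { Order, OrderModel } from "./order-repository";

// Contract a standalone "order service" could expose so callers can page
// through results without pulling every document over the network.
interface Page<T> {
  items: T[];
  // Opaque cursor the client sends back to fetch the next page; null when done.
  nextCursor: string | null;
}

export async function listOrders(cursor: string | null, limit = 50): Promise<Page<Order>> {
  // Fetch documents after the cursor, in _id order (ObjectIds sort roughly by creation time).
  const filter = cursor ? { _id: { $gt: new Types.ObjectId(cursor) } } : {};
  const docs = await OrderModel.find(filter).sort({ _id: 1 }).limit(limit).exec();
  const last = docs[docs.length - 1];

  return {
    items: docs,
    nextCursor: docs.length === limit && last ? String(last._id) : null,
  };
}
```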
Since the project is in its early stages and I am the only developer, moving to a microservices-based architecture seems like overkill. Maybe there are other approaches I should consider?
Extracting business logic and database interaction into a separate repository and sharing it among services, to avoid complex communication protocols between services?
Based on my experience working with microservices over the last few years, it seems like overkill in your current scenario, but it pays off in the long term.
Based on the information stated above, my thoughts are:
Code Structure - Applying Microservices Architecture (MSA) in the above context does not mean separating the DAO, business logic, etc.; rather, it is about designing the system around business functions. For example, if it is an eCommerce application, then you can have shipping, cart and search as separate services, which can further be divided into smaller services. Read more about domain-driven design here.
Deployment Unit - Keeping each microservice as an independent deployment unit is a key principle. Hence, keep a vertical slice of the application and package it as a Docker image with the application code, app server (if any), database and OS (Linux, etc.).
Communication - With MSA, communication between services becomes key, and hence the general practice is to stay with a message-oriented approach for communication (read about reactive systems and reactive programming for more insight); a minimal sketch follows this list.
PaaS Solution - There are multiple PaaS solutions available which you can apply so that you don't need to worry about all the other aspects like container management, container orchestration, auto-scaling, configuration management, log management, monitoring, etc. See the following PaaS solutions:
https://www.nanoscale.io/ by TIBCO
https://fabric8.io/ - by RedHat
https://openshift.io - by RedHat
Cloud Vendor Platforms - AWS, Azure and Google Cloud all have specific support for microservice applications from a deployment perspective, which you can use as an alternative if you don't want to deploy a PaaS solution in your organization.
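As promised under "Communication", here is a minimal sketch of the message-oriented style, assuming RabbitMQ and the amqplib client; the queue and event names are made up:

```typescript
import amqp from "amqplib";

// Instead of calling another service's HTTP API directly, the publishing
// service drops an event on a durable queue and moves on; consumers pick it
// up at their own pace, which decouples deployment and availability.
export async function publishOrderCreated(orderId: string): Promise<void> {
  const connection = await amqp.connect(process.env.AMQP_URL ?? "amqp://localhost");
  const channel = await connection.createChannel();

  const queue = "order-created";
  await channel.assertQueue(queue, { durable: true });
  channel.sendToQueue(
    queue,
    Buffer.from(JSON.stringify({ type: "OrderCreated", orderId })),
    { persistent: true }
  );

  await channel.close();
  await connection.close();
}
```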
Hope these pointers will help you understand the overall landscape so that you can structure your architecture for future needs.
I am working on a monolith system... My goal is to set a new path for how the project should evolve. At first I was wondering if I could move towards a microservices-based architecture.
In what ways do you need to evolve the project? Will it be mostly bugfixes, adding features, improving performance and/or scalability? Do you anticipate other developers collaborating in the future? Are you currently having maintenance issues? The answers to these questions (and many more) should be considered in guiding your choices.
You seem to be doing your homework around the pros and cons of a microservice architecture, so if you haven't asked yourself why you're even doing this in the first place, now would be good time to do so.
Maybe there are other approaches I should consider?
There's always the good old don't-break-what's-working ;)
In theory I understand how microservices work and why they can be helpful in various cases, but I still don't get how it works in practice.
Let's say there's an online shop based on a CMS, running as a monolithic application.
And there's now the need to run the online shop in a microservices architecture.
How would this microservices architecture differ technically from the current, monolithic, architecture?
For example, take productsearch.php. If I want to scale this function, normally I would have to set up a new server and copy the whole CMS resources folder to it for load balancing.
With microservices, productsearch.php would be a single microservice, I guess, and I would only have to copy this PHP file to scale it, without the need to copy other resources?
I have tried to explain it using this diagram of a fictitious CMS. With a microservices architecture, we can independently scale each microservice. Each microservice may be developed by a different team; they may even be developed using different technologies. With great flexibility comes great maintenance overhead, but I believe it is worth it, as most of it can be automated.
Put simply, each module in a monolithic application is a potential candidate for a microservice. However, microservices can be more granular than a traditional module.
This page does a good job of explaining how to decompose your monolithic application: http://microservices.io/patterns/decomposition/decompose-by-business-capability.html
Technically and conceptually, a microservice is independent of other services (whereas in a monolith you'd have modules with inter-dependencies).
Technically, a microservice built on a modern microservices platform (such as Node.js, Spring Boot or .NET Core) will more easily be able to take advantage of containerization systems (such as Docker), perhaps supported by service registry and configuration management technologies (such as Kubernetes, ZooKeeper, Eureka and so on).
The advantage of containerization is that it is easier to scale out (add more containers). Going further, the whole microservice/containerization concept, and the related technologies, also help enable things like CI/CD.
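To make the scaling point concrete: a hypothetical stand-alone product-search service could be as small as the sketch below (Node.js/TypeScript purely as an example; the original shop is PHP). Because it is stateless and owns only its own code, you scale it by running more containers of just this service behind a load balancer, without copying the rest of the CMS:

```typescript
import { createServer } from "http";

const port = Number(process.env.PORT ?? 3000);

const server = createServer((req, res) => {
  // Orchestrators and load balancers use this to check the instance is alive.
  if (req.url === "/health") {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ status: "ok" }));
    return;
  }

  // A real implementation would query a search index or database; stubbed here.
  if (req.url?.startsWith("/search")) {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ query: req.url, results: [] }));
    return;
  }

  res.writeHead(404);
  res.end();
});

server.listen(port, () => {
  console.log(`product-search listening on port ${port}`);
});
```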
There are a lot of IoT platforms on the market, like AWS IoT and Microsoft Azure IoT Hub, and I understand all the features that are offered by those platforms.
Questions:
Couldn't I implement all those features in a normal web application which handles the communication and all these features, run this application on a cluster of unmanaged servers, and get the same result?
When should I use a normal web application and when should I use an IoT platform?
Of course you can implement your own IoT hub on any web application and cloud (or on-prem) platform; there is nothing secret or proprietary in those solutions. The question is, do you want to do that? What they offer is a lot of built-in functionality that would take you some serious time to get production-ready if you built it yourself.
So:
1) Yes, you can build it. Let's compare it to Azure IoT Hub and look at what it contains:
a) reliable messages to and from hub
b) periodic health pulses
c) connected device inventory and device provisioning
d) support for multiple protocols (e.g. HTTP, AMQP, MQTT...)
e) access control and security using tokens (see the sketch after this list)
.... and more. This is not meant to be a full feature list, just to illustrate that these solutions contain a whole lot of functionality, which you may (or may not) need when building your own IoT solution.
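To give a feel for item (e) alone, here is a rough sketch of generating a SAS-style device token yourself with Node's crypto module. It is modeled loosely on the format Azure IoT Hub uses, but simplified and not a drop-in implementation:

```typescript
import { createHmac } from "crypto";

// Hypothetical helper: signs a resource URI with a device's shared key so the
// device can authenticate for a limited time. A real hub also handles key
// rotation, revocation, per-device identities, and so on.
export function generateDeviceToken(
  resourceUri: string,
  deviceKeyBase64: string,
  ttlSeconds = 3600
): string {
  const expiry = Math.floor(Date.now() / 1000) + ttlSeconds;
  const stringToSign = `${encodeURIComponent(resourceUri)}\n${expiry}`;

  const signature = createHmac("sha256", Buffer.from(deviceKeyBase64, "base64"))
    .update(stringToSign)
    .digest("base64");

  return (
    `SharedAccessSignature sr=${encodeURIComponent(resourceUri)}` +
    `&sig=${encodeURIComponent(signature)}&se=${expiry}`
  );
}
```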
2) When does it make sense to build this yourself? I would say when you have a solution where you don't really need all of that functionality, or you can easily build or set up the parts you need yourself. Building all of that functionality doesn't, generally speaking, make sense unless you are building your own IoT platform.
Another aspect is the ability to scale and to offer a solution in multiple geographic locations. A web application on a cloud provider could easily be set up to both autoscale and cover multiple regions, but it is still something you would have to set up and manage yourself. It would likely also be more expensive to provide the same performance as the platform services do; they are built for millions of devices across a large number of customers, and their solution will likely look different under the hood.
Third is time-to-market: going with a platform service will get you up and running with your IoT solution fairly quickly, as opposed to building it yourself.
Figure out what requirements you want to support, how you want to scale, how many devices and so on. Then you can do a simple comparison of price and also what it would cost you to build the features you need.
I am trying to evaluate different web service frameworks for API development in .Net. So far the frameworks I've been looking at are:
ServiceStack
MVC
Web API
NancyFx
I'm trying to find some common talking-points between the frameworks so I know what to look for when picking a framework. The talking points I've got so far are:
The Framework beliefs and principles
The Architecture of the framework (Client and Service side)
The Stack the framework provides you with
The Ease of development within the stack (plugins etc)
End-to-end performance benchmarks
Scalability benchmarks
Framework documentation availability
Framework Support (Cross platform etc)
Pricing
Overall Conclusion
Can anyone think of anything else I should consider? By the end of the research I'm hoping to write about each framework in detail and to make comparisons as to which framework to choose for a given purpose. Any help would be greatly appreciated.
End-to-End Productivity - The core essence of a Service is to deliver some value to its consumers. Therefore the end-to-end productivity of consuming Services should also be strongly considered: the ease with which Services can be consumed from clients, with the least effort, ultimately provides more value to clients, and this is often more valuable than the productivity of developing the Services themselves, since that value is multiplied across their many consumers. As many Services constantly evolve, the development workflow for updating Services, and how easy it is to determine what has changed (i.e. whether they have a static API), also impacts productivity on the client.
Interoperability - Another goal of a Service is interoperability: how well Services can be consumed from heterogeneous environments. Most Web Service frameworks just do HTTP; however, in many Intranet environments, sending API requests via an MQ is more appropriate, as it provides greater resilience than HTTP, time decoupling, natural load balancing, decoupled endpoints, improved messaging workflows, error recovery, etc. There are also many Enterprises (and Enterprise products) that still only support or mandate SOAP, so having SOAP endpoints and supporting XSD/WSDL metadata can also be valuable.
Versionability - Some API designs are naturally better suited to versioning where evolving Services can be enhanced defensively without breaking existing Service Consumers.
Testability and Mockability - You'll also want to compare the ease with which Services can be tested and mocked, to determine how easy it is to create integration tests and whether doing so requires new knowledge and infrastructure. Also consider how well the framework supports parallel client development, which is important when front-end and back-end teams develop solutions in parallel: the API contracts of a Service can be designed and agreed upon prior to development, to ensure they meet the necessary requirements before implementation, and then the front-end and back-end teams can implement them independently of each other. If the Services haven't been implemented yet, the clients would need to "mock" the Service responses until they have, and later switch to the real Services once they have been implemented.
Learnability - How intuitive it is to develop Services and the amount of cognitive and conceptual overhead required also affect productivity. The ability to reason about how a Service framework works and what it does has an impact on your solution's overall complexity, on your team's ability to make informed implementation decisions that affect performance and scalability, and on the effort it takes to ramp up new developers on your solution.