There are a lot of IoT platforms on the market, like AWS IoT and Microsoft Azure IoT Hub, and I understand the features those platforms offer.
Questions:
Couldn't I implement all those features in a normal web application that handles the communication, run that application on a cluster of unmanaged servers, and get the same result?
When should I use a normal web application, and when should I use an IoT platform?
Of course you can implement your own IoT hub as a web application on any cloud (or on-prem) platform; there is nothing secret or proprietary in those solutions. The question is: do you want to? What they offer is a lot of built-in functionality that would take you serious time to get production ready if you built it yourself.
So:
1) Yes, you can build it. Let's compare it to Azure IoT Hub and look at what that contains:
a) reliable messages to and from hub
b) periodic health pulses
c) connected device inventory and device provisioning
d) support for multiple protocols (e.g. HTTP, AMQP, MQTT, ...)
e) access control and security using tokens
.... and more. This is not meant to be a full feature list, just an illustration that these solutions contain a whole lot of functionality, which you may (or may not) need when building your own IoT solution.
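To make that gap concrete, even a thin slice of (e), a per-device token check on a telemetry endpoint, means writing and hardening code like the following minimal sketch (all names, headers, and the token scheme are invented for illustration; a real hub also covers token rotation, revocation, TLS, and much more):

```typescript
// Minimal sketch of item (e): per-device token check on a telemetry endpoint.
// Hypothetical scheme: token = HMAC-SHA256(deviceId, device's shared secret).
import http from "node:http";
import crypto from "node:crypto";

// Toy device registry: deviceId -> shared secret (a real hub has provisioning).
const devices = new Map([["dev-001", "s3cret"]]);

function verifyToken(deviceId: string, token: string): boolean {
  const secret = devices.get(deviceId);
  if (!secret) return false; // unknown device
  const expected = crypto
    .createHmac("sha256", secret)
    .update(deviceId)
    .digest("hex");
  const a = Buffer.from(token);
  const b = Buffer.from(expected);
  return a.length === b.length && crypto.timingSafeEqual(a, b);
}

http
  .createServer((req, res) => {
    const deviceId = (req.headers["x-device-id"] as string) ?? "";
    const token = (req.headers["x-device-token"] as string) ?? "";
    if (!verifyToken(deviceId, token)) {
      res.writeHead(401).end("unauthorized");
      return;
    }
    res.end("telemetry accepted"); // reliable delivery, retries, etc. not shown
  })
  .listen(8080);
```

That is one small corner of one feature; multiply it by the list above to estimate the build-it-yourself effort.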
2) When does it make sense to build this yourself? I would say when you have a solution where you don't really need all of that functionality, or where you can easily build or set up the parts you do need. Generally speaking, building all of that functionality doesn't make sense unless you are building your own IoT platform.
Another aspect is the ability to scale and to offer a solution for multiple geographic locations. A web application on a cloud provider could easily be set up both to autoscale and to cover multiple regions, but it is still something you would have to set up and manage yourself. It would likely also be more expensive to provide the same performance as the platform services do; they are built for millions of devices across a large number of customers, so their solutions likely look quite different under the hood.
Third is time to market: going with a platform service gets you up and running with your IoT solution fairly quickly, as opposed to building it yourself.
Figure out what requirements you want to support, how you want to scale, how many devices you expect, and so on. Then you can do a simple comparison of the price, and of what it would cost you to build the features you need.
I am confused about how I should deploy my microservices in Azure. I deployed by creating a couple of App Services, one per microservice, using an ARM template. It is becoming very costly to deploy every microservice in a different App Service, and very difficult to manage all these services. Another approach I was considering is to deploy everything under one App Service, but that would again be a monolithic kind of web API.
Recently, I learned from a blog that you should use Azure Service Fabric to deploy microservices.
I want to understand which of the options below I should choose:
One App Service.
Multiple microservices, each in its own App Service.
Containerization with Kubernetes (or another orchestrator).
Azure Service Fabric.
Any other option you would suggest.
I am really confused about these options. Please help me.
I'd highly recommend starting with the Azure Architecture Guide which will give you a solid big-picture overview. From there, you could take a look at the microservice-specific guidance.
To provide a very short, incomplete answer to your question, App Services are a unit of scale. If you're building a small service that focuses on one domain, and all of your functionality can scale together, you may be better off with one application hosted on one App Service. Know your domain first; don't split things up just to have microservices.
To choose which Azure compute service to use, this decision tree is very helpful.
Microservices are not only a solution to technological problems; they are also a solution to an organizational scalability problem. On the other hand, microservices are really hard to manage, which is why they usually cannot be implemented without DevOps practices to help handle that complexity.
I am saying all this because you wrote that they are becoming hard to manage, and it might be that the problem is not technical: you may not have the right org structure and processes in place to handle microservices.
Microservices work well when the team that builds a service also runs it, and that includes deployment, support, etc. You should not have one person or team "handling" and deploying other teams' microservices, because, as you discovered, microservices are really hard to manage.
From a pure technological point of view, you need to clarify a few things:
How many microservices you have
How many teams/developers you have
What technologies your microservices are built with
How chatty the microservices are with each other
Based on those four questions, you will probably end up on App Services if you have a small number of microservices, none of them needs more than 10 instances, they are built with one of the supported technologies, and they are not super chatty with each other.
I would use AKS if you have a lot of microservices and many teams, so that it is worth having a small platform team becoming expert in Kubernetes (not in charge of the deployments!).
I would recommend going through these links:
Martin Fowler: https://martinfowler.com/microservices/
DevOps at Microsoft: https://www.youtube.com/watch?v=OwiT59e0kB4&t=349s
Why not to do Microservices: https://segment.com/blog/goodbye-microservices/
One of the million microservices talks on YouTube, like this one: https://www.youtube.com/watch?v=MrV0DqTqpFU
I have a couple of questions around microservice architecture. For example, take the following services:
orders,
account,
communication &
management
Question 1: From what I read, I understand that each service is supposed to have ownership of the data pertaining to that service, so orders would have an orders database. How important is that data ownership? Would microservices make sense if they all read from one traditional database, such that all data pertaining to the services lived in one database? If so, are there any implications of structuring the services this way?
Question 2: Services should be able to communicate with one another. How would that be any different from simply curling an existing API and basing the logic on that response? Is calling a service more efficient than simply curling the API?
Question 3: Is it worth it? I understand this is a massive generality, and that it's fundamentally predicated on the needs of the business. But once that discussion has been had, was the rebuild worth it? And what challenges can you expect to face?
I will try to answer all the questions.
Regarding all services using the same database: if you do that, you have two main problems. First, the database becomes a bottleneck, because all requests go to the same point. Second, you will have coupled all your services together, so if the database goes down or needs an update, all your services are affected. (The database becomes a single point of failure.)
The communication between services can be whatever your services need (synchronous, asynchronous, message passing via a message broker, etc.); it all depends on the use cases you have to support. The recommended way to avoid temporal coupling is to use a message broker like Kafka. That way, your services don't have to know each other, and if some of them go down, the others keep working; when the failed ones come back up, they can continue processing the messages they have pending. However, if your services need to respond synchronously, you can define synchronous communication between them and use a circuit breaker to behave properly in case the callee service is down.
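A minimal circuit-breaker sketch for that synchronous case (thresholds, names, and the simplified half-open handling are illustrative, not taken from any particular library):

```typescript
// Sketch: trip open after N consecutive failures, reject calls while open,
// and allow a single probe call after a cooldown (simplified half-open state).
class CircuitBreaker {
  private failures = 0;
  private openedAt = 0;

  constructor(
    private maxFailures = 3, // consecutive failures before tripping open
    private cooldownMs = 10_000, // how long to stay open
  ) {}

  async call<T>(fn: () => Promise<T>): Promise<T> {
    if (this.failures >= this.maxFailures) {
      if (Date.now() - this.openedAt < this.cooldownMs) {
        throw new Error("circuit open: callee presumed down");
      }
      this.failures = this.maxFailures - 1; // half-open: allow one probe
    }
    try {
      const result = await fn();
      this.failures = 0; // success closes the circuit again
      return result;
    } catch (err) {
      this.failures += 1;
      if (this.failures >= this.maxFailures) this.openedAt = Date.now();
      throw err;
    }
  }
}

// Usage: wrap the synchronous call to the other service (hypothetical helper).
// const breaker = new CircuitBreaker();
// const order = await breaker.call(() => callOrdersService(id));
```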
A microservices architecture is far more complicated to make work, to monitor, and to debug than a traditional monolithic architecture. It is only worth it if you have very large scalability and availability requirements, and/or the system is so large that it requires several teams working on different parts, where it is advisable to avoid dependencies among them so that each team can work at its own pace and deploy its own services.
My client asked me to build a realtime application for chat and for sending images and videos, all in realtime. He asked me to come up with my own technology stack, so I did a lot of research and concluded that the easiest one to build would use the tech stack below:
1) Node.js with cluster, to max out the CPU cores on a single server instance
2) Socket.io as the realtime framework
3) Redis pub/sub, to coordinate multiple server instances
4) Nginx, as a reverse proxy and load balancer across multiple servers
5) Amazon EC2, to run the servers
6) Amazon S3 and CloudFront, to store and deliver the images/videos
Correct me if I'm wrong about the above stack. My real question is: can this tech stack scale to 1,000,000 messages per second (text, images, videos)?
If anyone has experience with Node.js and socket.io, could you give me insights on, or alternatives to, the above stack?
My real question is: can this tech stack scale to 1,000,000 messages per second (text, images, videos)?
Sure it can, with the right design and enough hardware. The question your client should really be asking is not whether it can be made to go that big, but at what cost and with what practicality it can be done, and whether these are the best choices.
Let's look at each piece you've mentioned:
node.js - For an I/O centric app, it's an excellent choice for high scale and it can scale by deploying many CPUs in a cluster (both multi-process per server and multi-server). How practical this type of scale is depends a lot on what kind of shared data all these server processes need access to. Usually, the data store ultimately ends up being the harder bottleneck in scaling because it's easy to throw more servers at the request processing. It's not so easy to throw more hardware at a centralized data store. There are ways to do that, but it depends a lot on the demands of the app for how you do it and how hard it is.
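As a minimal sketch of the multi-process-per-server part (ports and messages are placeholders; assumes Node 16+ for `isPrimary`), Node's built-in cluster module forks one worker per core:

```typescript
// Sketch: fork one worker per CPU core; the cluster module distributes
// incoming connections across the workers on the shared port.
import cluster from "node:cluster";
import os from "node:os";
import http from "node:http";

if (cluster.isPrimary) {
  for (let i = 0; i < os.cpus().length; i++) {
    cluster.fork();
  }
  cluster.on("exit", (worker) => {
    console.log(`worker ${worker.process.pid} died, restarting`);
    cluster.fork(); // simple self-healing: replace a dead worker
  });
} else {
  // Each worker runs its own HTTP server on the same port.
  http
    .createServer((req, res) => {
      res.end(`handled by pid ${process.pid}\n`);
    })
    .listen(3000);
}
```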
socket.io - If you need efficient server push of smallish messages, then socket.io is probably the best way to go because it's the most efficient at push to the client. It is not great at all types of transport though. For example, I wouldn't be moving large images or video around through socket.io as there are more purpose built ways to do that. So, the use of socket.io depends a lot on what exactly the app wants to use it for. If you wanted to push a video to a client, you could also push just an URL and have the client turn around and request the video via a regular http URL using well known high scale technology.
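A hedged sketch of that last point, pushing only a lightweight notification over socket.io and letting the client fetch the heavy asset over plain HTTP/CDN (the event name and URL shape are made up):

```typescript
// Sketch: emit metadata plus a CDN URL instead of streaming video bytes
// through the websocket.
import { Server } from "socket.io";

const io = new Server(3001);

io.on("connection", (socket) => {
  // Hypothetical event and fields: the client requests the video itself
  // via a regular HTTP URL served by S3/CloudFront.
  socket.emit("newVideo", {
    id: "abc123",
    url: "https://dxxxxxxxx.cloudfront.net/videos/abc123.mp4",
  });
});

// Client side (browser), for contrast:
//   socket.on("newVideo", ({ url }) => { videoElement.src = url; });
```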
Redis - Again, great for some things, not great at everything. So, it really depends upon what you're trying to do. What I explained earlier is that the design of your data store and the number of transactions through it is probably where your real scale problems lie. If I were starting this job, I'd start with an understanding of the data storage needs for a server, transactions per second of various types, caching strategy, redundancy, fail-over, data persistence, etc... and design the high scale access to data first. I wouldn't be entirely sure redis was the preferred choice. I'd probably suggest you need a high scale database guy as a consultant early in the project.
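That said, the classic Redis role in this particular stack is item 3 from the question: pub/sub fan-out across server instances. A minimal sketch with the node-redis v4 client (channel name and payload are illustrative):

```typescript
// Sketch: fan chat messages out across server instances via Redis pub/sub.
import { createClient } from "redis";

const publisher = createClient({ url: "redis://localhost:6379" });
// A subscriber connection is dedicated: it cannot issue regular commands.
const subscriber = publisher.duplicate();

await publisher.connect();
await subscriber.connect();

// Every server instance subscribes; a message published by any instance
// reaches all of them, and each pushes it to its locally connected sockets.
await subscriber.subscribe("chat", (message) => {
  // e.g. io.emit("chat", JSON.parse(message)); // hand off to socket.io
  console.log("received", message);
});

await publisher.publish("chat", JSON.stringify({ user: "a", text: "hi" }));
```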
Nginx - Lots of high-scale sites use nginx, so it's certainly a good tool. Whether it's exactly the right tool for you depends upon your design. I'd probably work on this part last, because it seems less central to the design; once the rest of the system is laid out, you can consider what you need here.
Amazon EC2 - One of several possible choices. These choices are hard to compare in an apples-to-apples way. Large-scale systems have been built on EC2, so there is proof of concept there, and the general architecture seems an appropriate match. If you want to know where the real gremlins are, you'd need a consultant who has done high-scale work on EC2.
Amazon S3 - I personally know some very high storage and bandwidth sites using S3 for both video and images. It works for that.
So ... these are generally likely to be good tools, if they are used in the right way. Redis is a question mark depending upon the storage needs of the actual application (you've provided zero requirements, and a database can't be selected with zero requirements). A more reasoned answer would be based on putting together a high-level set of requirements that analyzes what the system needs to do to serve 1,000,000 of whatever. Those requirements could be compared with the known capabilities of some of these pieces to get a ballpark on scaling the system. Then you'd put together benchmarking tests to exercise certain pieces of the system. As much of the success or failure would depend upon how the app is built and how the tools are used as on which tools were selected. You can likely scale successfully with many different types of tools. Heck, Facebook runs on PHP (well, a highly modified, customized PHP that is not really typical PHP at all at runtime).
I am trying to evaluate different web service frameworks for API development in .NET. So far the frameworks I've been looking at are:
ServiceStack
MVC
Web API
NancyFx
I'm trying to find some common talking-points between the frameworks so I know what to look for when picking a framework. The talking points I've got so far are:
The Framework beliefs and principles
The Architecture of the framework (Client and Service side)
The Stack the framework provides you with
The Ease of development within the stack (plugins etc)
End-to-end performance benchmarks
Scalability benchmarks
Framework documentation availability
Framework Support (Cross platform etc)
Pricing
Overall Conclusion
Can anyone think of anything else I should consider? By the end of the research, I'm hoping to write about each framework in detail and to make comparisons as to which framework to choose for a given purpose. Any help would be greatly appreciated.
End-to-End Productivity - The core essence of a Service is to deliver some value to its consumers. Therefore the end-to-end productivity of consuming Services should also be strongly considered: the ease with which Services can be consumed from clients, with the least effort, ultimately provides more value to those clients, and that is often more valuable than the productivity of developing the Services themselves, since the value is multiplied across multiple consumers. As many Services constantly evolve, the development workflow for updating Services, and how easy it is to determine what's changed (i.e. whether they have a static API), also impacts productivity on the client.
Interoperability - Another goal of a Service is interoperability: how well Services can be consumed from heterogeneous environments. Most web service frameworks just do HTTP; however, in many intranet environments, sending API requests via an MQ is more appropriate, as it provides greater resilience than HTTP: time decoupling, natural load balancing, decoupled endpoints, improved messaging workflows, error recovery, etc. There are also many enterprises (and enterprise products) that still only support or mandate SOAP, so having SOAP endpoints and supporting XSD/WSDL metadata can also be valuable.
Versionability - Some API designs are naturally better suited to versioning where evolving Services can be enhanced defensively without breaking existing Service Consumers.
Testability and Mockability - You'll also want to compare how easily Services can be tested and mocked, to determine how easy it is to create integration tests and whether that requires new knowledge and infrastructure. This also determines how well a framework supports parallel client development, which is important when front-end and back-end teams build solutions in parallel: the API contracts of a Service can be designed and agreed upon prior to development, to ensure they meet the necessary requirements, and then the front-end and back-end teams can implement them independently of each other. If the Services haven't been implemented yet, the clients need to "mock" the Service responses until they have, then later switch to the real Services once they've been implemented.
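Framework aside (the sketch below is illustrative TypeScript, not tied to any of the .NET frameworks above; all names are hypothetical), the parallel-development pattern looks like this: agree on the contract, mock it, then swap in the real implementation:

```typescript
// Sketch: the contract is agreed first; the client depends only on the
// interface, so swapping the mock for the real service is one line.
interface OrderService {
  getOrder(id: string): Promise<{ id: string; status: string }>;
}

// Used by the front-end team while the back end is still in progress.
class MockOrderService implements OrderService {
  async getOrder(id: string) {
    return { id, status: "pending" }; // canned response per the contract
  }
}

// Real implementation, added later, satisfying the same contract.
class HttpOrderService implements OrderService {
  async getOrder(id: string): Promise<{ id: string; status: string }> {
    const res = await fetch(`https://api.example.com/orders/${id}`);
    return res.json();
  }
}

const service: OrderService = new MockOrderService(); // swap when ready
```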
Learnability - How intuitive it is to develop Services. The amount of cognitive and conceptual overhead required also affects productivity, and the ability to reason about how a Service framework works and what it does has an impact on your solution's overall complexity, on your team's ability to make informed implementation decisions that affect performance and scalability, and on the effort it takes to ramp up new developers on your solution.
We have a POC using Spring Core whose work is essentially determined by two application properties read from a file. We can scale this out by spinning up additional JVMs (running the same code base) and assigning different property values to each JVM so that they don't interfere with each other. This works to an extent, but I would like to make it more dynamic. I can kind of see how Spring Integration (SI) might be a fit here. I think I could create one application that queries the DB, figures out the work parameters, and sends those out to the available instances of our application in a round-robin fashion, but I am having trouble seeing how to implement it technically. All the applications are running on the same machine, so they have the same IP address. Also, they are not web apps. Would I need to use JMS (which I am not familiar with), or can SI handle this?
You could use JMS, RabbitMQ, Redis, or any number of outbound endpoints to distribute the work.
Let's say you choose to use simple RMI or TCP/UDP; you can simply have a number of outbound endpoints subscribed to the routing channel, and SI will round-robin the requests (by default).
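That default dispatching behavior is roughly analogous to this round-robin sketch (illustrative TypeScript, not Spring Integration configuration; endpoint names are placeholders):

```typescript
// Sketch: round-robin dispatch over a fixed set of subscribed endpoints,
// similar in spirit to a channel's default round-robin load balancing.
type Endpoint = (work: string) => void;

class RoundRobinChannel {
  private next = 0;
  constructor(private endpoints: Endpoint[]) {}

  send(work: string): void {
    // Pick the next endpoint in rotation, wrapping around.
    const endpoint = this.endpoints[this.next % this.endpoints.length];
    this.next += 1;
    endpoint(work);
  }
}

// Statically configured endpoints, one per worker JVM (placeholders).
const channel = new RoundRobinChannel([
  (w) => console.log("worker-1 handles", w),
  (w) => console.log("worker-2 handles", w),
]);

channel.send("job-A"); // goes to worker-1
channel.send("job-B"); // goes to worker-2
```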
This would be statically configured, though. You would need a little glue if you want to dynamically change the number of servers without using a broker such as JMS or RabbitMQ.
The dynamic FTP sample illustrates a technique for adding new destinations (in that case, FTP servers) dynamically.