I have an idea for a social network website, and as I'm currently learning web development I thought it would be a great project to practice on. I have already worked out the business logic and the front-end design with React. Now it's time for the backend, and I'm struggling.
I want to create a React+Nodejs event-driven app. It seems logical to use Kafka right away. I looked through various Kafka architecture examples and have several questions:
Is it possible to create an app that uses Kafka for the data flowing through API calls from Node.js to React and vice versa, and a relational database only for long-term storage?
Is it better to use Kafka to handle all events but communicate with some NoSQL database like Cassandra or HBase? Then it seems that Node.js would have to make API calls to them and send the data to React.
Am I completely missing the point and talking nonsense?
Any answer to this question is going to be quite opinion based, so I'm just going to try to stick to the facts.
It is totally possible to create such an application. Moreover, Kafka is basically a distributed log, so you can use it as an event store and build your state from that.
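To make that concrete, here is a minimal sketch of the "distributed log as event store" idea from Node, using the kafkajs client; the broker address, topic name and event shape are assumptions, not a prescribed design:

```js
// Minimal sketch: Kafka as an event store from Node.js (kafkajs client).
// Broker address, topic name and event shape are illustrative assumptions.
const { Kafka } = require('kafkajs');

const kafka = new Kafka({ clientId: 'social-app', brokers: ['localhost:9092'] });

// Append an event to the log (e.g. when a user posts something).
async function recordEvent(event) {
  const producer = kafka.producer();
  await producer.connect();
  await producer.send({
    topic: 'user-events',
    messages: [{ key: event.userId, value: JSON.stringify(event) }],
  });
  await producer.disconnect();
}

// Rebuild in-memory state by replaying the topic from the beginning.
async function rebuildState(state = {}) {
  const consumer = kafka.consumer({ groupId: 'state-builder' });
  await consumer.connect();
  await consumer.subscribe({ topic: 'user-events', fromBeginning: true });
  await consumer.run({
    eachMessage: async ({ message }) => {
      const event = JSON.parse(message.value.toString());
      // Apply the event however your domain requires; here we just group by user.
      (state[event.userId] = state[event.userId] || []).push(event);
    },
  });
  return state;
}
```

In a real service you would keep one long-lived producer instead of connecting per call; this is condensed for illustration.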
That mainly depends on your architecture, and there are too many gaps here to answer this with any certainty - what kind of data are you saving? What kind of questions will you need answered? What does your domain model look like? You could use Kafka as a store, or as a persistent messaging service.
I don't think you're off the mark, but perhaps you're going for the big guns when in reality you don't really need them. Kafka is great for a very large volume of events going through. If you're building something new, you don't have that volume. Perhaps start with something simpler that doesn't require so much operational complexity.
Related
I created a realtime application. It stores a lot of data locally, and once the Node process crashes, all of that data is lost.
How can I persist this data and rejoin all the clients to the same session or room? Please let me know.
The libraries are:
Socket.io (version 4.2)
Node (version 16.14)
Well, your question seems simple, but it is actually kind of complicated to answer with so little information at hand...
There are a few aspects to consider, such as project architecture, data update times, data availability, data reliability, etc.
But to keep it short, and basing my answer on the sole premise that you need something to store your data outside of Node.js, yet still fast...
I'd recommend using Redis or Kafka. Both have their pros and cons and are meant for different needs, but both keep your data outside the Node process (Redis as an in-memory store, Kafka as a persistent distributed log).
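For the Socket.io case specifically, here is a minimal sketch of the Redis option using the node-redis client: room membership is written to Redis on every join, so after a crash and restart a reconnecting client can be put back into its rooms. The key names and the userId handshake field are assumptions.

```js
// Minimal sketch: keep room membership in Redis so it survives a Node crash.
// Key names and the userId handshake field are illustrative assumptions.
const { createClient } = require('redis');
const { Server } = require('socket.io');

const redis = createClient(); // defaults to localhost:6379
const io = new Server(3000);

async function main() {
  await redis.connect();

  io.on('connection', async (socket) => {
    const userId = socket.handshake.auth.userId;

    // On (re)connect, put the user back into whatever rooms Redis remembers.
    const rooms = await redis.sMembers(`user:${userId}:rooms`);
    rooms.forEach((room) => socket.join(room));

    socket.on('joinRoom', async (room) => {
      socket.join(room);
      await redis.sAdd(`user:${userId}:rooms`, room); // persisted outside the process
    });
  });
}

main();
```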
Hope my answer helped you, and set you in the right direction.
I have two separate cloud-based APIs that I am working on integrating together. Neither piece of software talks directly to the other, so I am creating something in the middle to get them to communicate. I have had trouble finding examples or documentation on how exactly to do this; does anyone know of any resources that could help me out?
My plan going in was to use a MERN Stack, running on a local server to do GET and POST requests to both APIs, use some mapping and logic to transpose the data into the correct format and send it to the other software. I do not have a client per se (other than myself) on my end, so I really will be skipping the React part of MERN, at least that is what I'm thinking. I'll be using Mongo to keep track of both sets of data for redundancy. I also considered using a LAMP Stack but felt that MERN would be faster in handling the data, and Mongo is more flexible in handling different data formats. If there is another process or technology that could help me that I'm not thinking of, I would be grateful to hear about it.
Has anyone encountered something like this before? Thank you.
As with most architecture questions, there's no completely right or wrong answer here. You could certainly design a well-built system for this purpose with either stack, even more so when you mention that your front-end framework is not an important consideration. Instead, ask yourself questions like this:
Which stack do you have more experience with, and is this an appropriate time to learn a new set of technologies, or is it important to do the best work you're capable of right now (how important is time, cost, or quality in this case)?
Another generalization I'll stick my neck out for is a data-first approach: what sort of data are you dealing with from each cloud integration, and what kind of data do you need to support and/or create in order to make your system work? Mongo, being a NoSQL persistence layer, will allow you to change your data model and handle more varied data in a quicker and easier manner than a SQL solution will. This is a double-edged sword, however, as the lack of validation and of a strongly-constrained (typed) data model will make your application harder to work with and debug as it grows. In short: how big might this application grow?
If you have a handy and familiar way to manage the three different data models you're dealing with (cloud service 1, cloud service 2, and your app) via MySQL, then that's a compelling reason to use it. However, if your style is to start dumping data into your database and you're comfortable with a more iterative approach (which may require more, albeit shorter rounds of refactoring), then Mongo with MERN may be the preferable choice.
Finally, will others ever be working on this application? If so, which language would you prefer to be collaborating with them in - PHP or JavaScript?
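Whichever stack you pick, the layer in the middle can stay quite small. Here is a minimal sketch of the pull-transform-push loop in Node; the URLs, field names and the transposeOrder() mapping are made-up placeholders, and the global fetch assumes Node 18+:

```js
// Minimal sketch of the "something in the middle": pull from one API,
// reshape the payload, push it to the other, and keep a copy in Mongo.
// URLs, field names and transposeOrder() are illustrative assumptions.
const { MongoClient } = require('mongodb');

function transposeOrder(sourceRecord) {
  // Your mapping/logic layer: rename and reshape fields as the target expects.
  return { externalId: sourceRecord.id, total: sourceRecord.amount_cents / 100 };
}

async function syncOnce() {
  const mongo = await MongoClient.connect('mongodb://localhost:27017');
  const log = mongo.db('integration').collection('sync_log');

  const res = await fetch('https://api.service-a.example/orders'); // Node 18+ global fetch
  const records = await res.json();

  for (const record of records) {
    const payload = transposeOrder(record);
    await fetch('https://api.service-b.example/orders', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(payload),
    });
    await log.insertOne({ source: record, sent: payload, at: new Date() }); // redundancy copy
  }

  await mongo.close();
}
```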
I have a couple of questions about microservice architecture. For example, take the following services:
orders,
account,
communication &
management
Question 1: From what I read, I understand that each service is supposed to have ownership of the data pertaining to that service, so orders would have an orders database. How important is that data ownership? Would microservices make sense if they all called into one traditional database, such that all data pertaining to the services existed in one database? If so, are there any implications of structuring the services this way?
Question 2: Services should be able to communicate with one another. How would that be any different from simply curling an existing API and basing the logic on that response? Is calling a service more efficient than simply curling the API?
Question 3: Is it worth it? Now, I understand this is a massive generality and it's fundamentally predicated on the needs of the business. But once that discussion has been had, was the rebuild worth it? And what challenges can you expect to face?
I will try to answer all the questions.
Regarding all services using the same database: if you do so, you have two main problems. First, the database becomes a bottleneck, because all requests go to the same point. Second, you have coupled all your services, so if the database goes down or needs to be updated, all your services are affected. (The database becomes a single point of failure.)
The communication between services can be whatever your services need (synchronous, asynchronous, via message passing through a message broker, etc.); it all depends on the use cases you have to support. The recommended way to avoid temporal coupling is to use a message broker like Kafka. That way your services don't have to know about each other, and if some of them go down, the others keep working; when they come back up, they can continue processing the messages they have pending. However, if your services need to respond synchronously, you can define synchronous communication between services and use a circuit breaker to behave properly in case the callee service is down.
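For the synchronous case, here is a minimal sketch of the circuit-breaker idea in Node, assuming the opossum library and a made-up orders-service URL:

```js
// Minimal sketch: guard a synchronous service-to-service call with a
// circuit breaker (opossum). The orders-service URL is an assumption.
const CircuitBreaker = require('opossum');

async function getOrder(orderId) {
  const res = await fetch(`http://orders-service/orders/${orderId}`); // Node 18+ global fetch
  if (!res.ok) throw new Error(`orders-service returned ${res.status}`);
  return res.json();
}

const breaker = new CircuitBreaker(getOrder, {
  timeout: 3000,                 // fail the call if it takes longer than 3s
  errorThresholdPercentage: 50,  // open the circuit when half the calls fail
  resetTimeout: 10000,           // try a probe call again after 10s
});

// Degrade gracefully while the callee is down instead of cascading the failure.
breaker.fallback(() => ({ status: 'unavailable' }));

// Usage: breaker.fire('42').then((order) => { ... });
```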
A microservices architecture is far more complicated to make work, to monitor and to debug than a traditional monolith, so it is only worth it if you have very large scalability and availability requirements, and/or if the system is so large that it will require several teams working on different parts of it and it is advisable to avoid dependencies among them, so that each team can work at its own pace deploying its own services.
I want to know the following things so that I can fix my server architecture and make it more flexible.
Is it good to store home feed data [ex: Facebook home feed] in a variable for future manipulation, or to just fetch the data related to the home feed and do whatever manipulation is needed at run time?
Please note that the home feed data set can contain anything. [not developed yet]
Is there any limit to the number of requests MongoDB can handle at any given time, which could create a delay in data processing?
Are Node.js and MongoDB a good option for social network development?
If you know anything related to social network development then please share the pros and cons.
Is it good to store home feed data [ex: Facebook home feed] in a variable for future manipulation, or to just fetch the data related to the home feed and do whatever manipulation is needed at run time?
You can (and sometimes should) pre-compute home feed data for certain users (for example those who are the most active). You don't store that in a variable though, you cache the results with something like Redis.
Generating the home feed on a "request" basis is also possible and good.
Both approaches require careful thinking about your system's architecture, performance, scalability, robustness, fault-tolerance, etc...
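As a rough sketch of the pre-computed approach, using the node-redis client: serve the feed from Redis when a cached copy exists, otherwise build it at request time and cache it. The key names, the 60-second TTL and buildFeedFromMongo() are assumptions.

```js
// Minimal sketch: serve a pre-computed home feed from Redis, fall back to
// building it on request. Keys, TTL and buildFeedFromMongo() are assumptions.
const { createClient } = require('redis');

const redis = createClient();
redis.connect(); // connect once at startup

async function getHomeFeed(userId, buildFeedFromMongo) {
  const cached = await redis.get(`feed:${userId}`);
  if (cached) return JSON.parse(cached); // pre-computed path

  const feed = await buildFeedFromMongo(userId); // compute at request time
  await redis.set(`feed:${userId}`, JSON.stringify(feed), { EX: 60 }); // cache for 60s
  return feed;
}
```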
Is there any limit to the number of requests MongoDB can handle at any given time, which could create a delay in data processing?
Yes. A MongoDB instance (or any other database) has limited resources. Look at the Sharding and Replication docs of MongoDB for more info about how to work with MongoDB at scale.
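In practice, the Node driver queues operations once its connection pool is saturated, and that queueing is usually where the delay shows up. A minimal sketch of tuning the pool (the numbers are illustrative, not recommendations):

```js
// Minimal sketch: the delay usually comes from a saturated connection pool,
// which queues operations. Pool settings below are illustrative values.
const { MongoClient } = require('mongodb');

const client = new MongoClient('mongodb://localhost:27017', {
  maxPoolSize: 100,          // how many concurrent operations one client can run
  waitQueueTimeoutMS: 5000,  // fail fast instead of queueing indefinitely
});
```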
Are Node.js and MongoDB a good option for social network development?
Node.js and MongoDB are a good combination for quick prototyping; you can get productive fairly quickly. Any language(s) you are familiar with is/are a good choice here, since your focus seems to be on architecture. Go, Java and PHP are good candidates too.
In the real world, social networks are built with a lot more tools than that, since teams use various programming languages, databases and frameworks depending on the task at hand.
I'm a Rails developer who has just migrated to Node, and I've decided to write an Angular application backed by a Postgres/Express.js REST API. I use the API primarily for CRUD operations thus far, but I want to start a realtime game instance when two players visit a certain page (challenge each other). I'm thinking of using socket.io to accomplish the realtime functionality.
The game is similar to Pokemon on the Game Boy, in which two players take turns performing certain actions until one of them wins.
I have the following questions:
Should I have a separate server to handle the game using socket.io, or can I use the same one my API operates on?
Should I use a service like Pusher or can I create the architecture myself?
How would I go about making sure no data is lost, if say, a player disconnects during a game?
At which point (number of concurrent connections / requests per second) would I run into performance issues? 100, 1000, 10000?
Thanks
If the realtime logic is closely related to the CRUD stuff (i.e. realtime events are a direct result of writes to the API), and you expect somewhat equal usage of both aspects of the system, then I'd put both on the same server.
I highly recommend using a realtime push service if possible (disclaimer: I work for Fanout.io). It'll be simpler and probably less expensive too.
The key to making sure data is not lost is to persist it on the server before sending. Don't depend on the realtime layer for persistence (biggest mistake you can make). When the client reconnects, it can request data it may have missed via the normal API. So, just get your CRUD stuff correct and then layer realtime eventing on top. You can create a very network resilient service this way.
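Here is a minimal sketch of that "persist first, then push" pattern with Express, Postgres (pg) and Socket.io; the table, route and event names are assumptions:

```js
// Minimal sketch: persist every game move before emitting it, and let a
// reconnecting client ask for what it missed. Table/route/event names are
// illustrative assumptions.
const express = require('express');
const { Pool } = require('pg');
const { createServer } = require('http');
const { Server } = require('socket.io');

const db = new Pool(); // reads PG* environment variables
const app = express();
app.use(express.json());
const httpServer = createServer(app);
const io = new Server(httpServer);

io.on('connection', (socket) => {
  socket.on('joinGame', (gameId) => socket.join(`game:${gameId}`));
});

// 1. Persist first, then push the realtime event.
app.post('/games/:id/moves', async (req, res) => {
  const { rows } = await db.query(
    'INSERT INTO moves (game_id, player, action) VALUES ($1, $2, $3) RETURNING id',
    [req.params.id, req.body.player, req.body.action]
  );
  io.to(`game:${req.params.id}`).emit('move', { id: rows[0].id, ...req.body });
  res.status(201).json({ id: rows[0].id });
});

// 2. A reconnecting client asks for everything after the last move it saw.
app.get('/games/:id/moves', async (req, res) => {
  const { rows } = await db.query(
    'SELECT id, player, action FROM moves WHERE game_id = $1 AND id > $2 ORDER BY id',
    [req.params.id, req.query.after || 0]
  );
  res.json(rows);
});

httpServer.listen(3000);
```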
You should be able to get to a few hundred concurrent connections without much thought. Going beyond will take architecture planning. Of course, if you delegate to a push service then you don't have to worry about this, at least for the realtime part.