We have an iOS application with a chat feature. Currently it works with long polling, and we are now trying to move it to sockets. Our research so far suggests that one of the best options is Node.js with socket.io, and we have added Redis pub/sub to manage message delivery and storage.
Reading up on Redis, the recommended usage is that the stored data should fit in memory, but our database is fairly large and we would like to keep the whole chat history. So we plan to use Redis as a cache database, holding the online users' chat history (perhaps not all of it), and to write the actual conversations from Redis to MongoDB/SimpleDB when a user goes offline (or to both immediately).
So, as a summary, we are about to decide on Node.js with Redis pub/sub to deliver messages, Redis as a cache database, and MongoDB to store the whole conversation.
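To make this concrete, here is a rough sketch of the delivery path we have in mind (channel and event names are placeholders, and persistence to MongoDB would happen in a separate worker):

```js
// server.js - sketch of the delivery path, not production code
const http = require('http').createServer();
const io = require('socket.io')(http);
const Redis = require('ioredis');

const pub = new Redis();
const sub = new Redis(); // a subscribing connection cannot issue other commands

sub.subscribe('chat'); // one channel for brevity; we would use one per room
sub.on('message', (channel, raw) => {
  const msg = JSON.parse(raw);
  io.to(msg.room).emit('chat:message', msg); // fan out to connected clients
});

io.on('connection', (socket) => {
  socket.on('join', (room) => socket.join(room));
  socket.on('chat:message', (msg) => {
    // publish for delivery; a worker would also persist this to MongoDB
    pub.publish('chat', JSON.stringify(msg));
  });
});

http.listen(3000);
```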
What do you think about the design? Is this acceptable? Or, if there is a better way you can suggest, can you please explain a little more?
Thanks in advance.
For a chat system, you're thinking big. If you think you're going to reach a million users, then go for it. Also consider availability: how will your system deal with the failure of a machine?
I need some advice. I am trying to create an ordering system like the one in McDonald's: I need a live feed of the current orders, and I need to be able to manipulate them in real time as well. The only approach I know is sending a GET request every second or so, but that can cause performance problems. Is there another way?
There is! It's called socket.io and is made for real-time communication.
In most cases, you have a server which manages the communication and also has a database. All clients will connect to this server and emit data or subscribe to events.
There are tons of tutorials out there. I recommend following one and building a simple chat application before you use it in your ordering application.
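As a taste of what that looks like, here is a minimal sketch of a live order feed (event names and the in-memory store are made up for illustration):

```js
// server.js - minimal sketch of a live order feed with socket.io
const app = require('express')();
const http = require('http').createServer(app);
const io = require('socket.io')(http);

let orders = []; // in a real app this would live in your database

io.on('connection', (socket) => {
  socket.emit('orders:init', orders); // send the current state on connect

  socket.on('orders:create', (order) => {
    orders.push(order);
    io.emit('orders:created', order); // push to every connected screen
  });

  socket.on('orders:update', (order) => {
    orders = orders.map((o) => (o.id === order.id ? order : o));
    io.emit('orders:updated', order);
  });
});

http.listen(3000);
```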
My client has 1000 taxis, and he wants to track every taxi's location and see it on his display. My question is how to track all the taxi information using the drivers' mobile devices. I am using MongoDB for the database.
I planned to solve this by developing an API which each mobile device calls with its location every 10 seconds, but the server gets very busy at that rate and the API cannot keep up.
I have seen that Firebase stores client information in real time. I need to know whether it is possible to get Firebase-like real-time behaviour with MongoDB.
I am using Node.js for backend development. If anyone knows a way to store real-time data, please help.
Generally speaking, you cannot track a taxi in true real time: the device's Internet connection may be poor due to weak signal, may have very high latency at times, or may be down entirely. Instead, design two independent pieces:
One which stores the current GPS location in a local FIFO queue
A second which flushes the queue to a remote server
This approach ensures that you will, eventually, receive all the positions, without having to worry about dropped packets and the other issues that can and will occur.
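A rough sketch of the two pieces (written in Node.js for illustration; on a phone this would be platform code, and readGps and the endpoint URL are hypothetical):

```js
// tracker.js - sketch of the two cooperating pieces on the device
const queue = []; // FIFO of positions not yet confirmed by the server

// Piece 1: sample the GPS every 10 seconds and enqueue locally
setInterval(() => {
  const position = readGps(); // placeholder for the platform's GPS API
  queue.push({ ...position, recordedAt: Date.now() });
}, 10000);

// Piece 2: flush the queue whenever the network allows it
setInterval(async () => {
  while (queue.length > 0) {
    try {
      await fetch('https://api.example.com/positions', { // hypothetical endpoint
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(queue[0]),
      });
      queue.shift(); // drop an entry only once the server has accepted it
    } catch (err) {
      break; // network is down; keep the entry and retry on the next tick
    }
  }
}, 5000);
```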
Instead of a TCP connection you can consider UDP (or better, DTLS), which is faster but less reliable. If reliability is a must (doubtful for a taxi tracker), go for TCP (or better, TLS). How you send or receive the data is just a detail.
Also make sure you authenticate the device before you store any data, especially if the connection between devices is not secure.
You can use Firebase as a real time database. Have a look at this link: https://github.com/firebase/geofire
But if you want to go with MongoDB, you can use MongoDB geospatial queries and socket.io to enable the real-time behaviour.
For more details: https://docs.mongodb.com/manual/geospatial-queries/
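For instance, finding the taxis within 2 km of a point might look like this (collection and field names are illustrative; it assumes a 2dsphere index on the location field):

```js
// one-time setup: a 2dsphere index on the GeoJSON location field
await db.collection('taxis').createIndex({ location: '2dsphere' });

// find taxis within 2 km of a point ([longitude, latitude] order)
const nearbyTaxis = await db.collection('taxis').find({
  location: {
    $near: {
      $geometry: { type: 'Point', coordinates: [90.4125, 23.8103] },
      $maxDistance: 2000, // metres
    },
  },
}).toArray();
```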
You can use https://socket.io/ for the real-time tracking system.
It is a JavaScript library for real-time web applications.
You just need to wire it up to MongoDB.
There are many blog posts that explain how to set up socket.io with MongoDB.
Some of them are...
http://blog.slatepeak.com/creating-a-real-time-chat-api-with-node-express-socket-io-and-mongodb/
https://blog.feathersjs.com/building-a-rest-and-real-time-api-with-express-feathers-and-mongodb-12071e5417e1
I think you are planning to implement the tracking in the frontend.
That is not a good or secure approach, because drivers could send fake requests in real time.
You can use WebSockets to check the taxis' positions in real time.
Please check this link.
I believe this approach is similar to your idea, and I hope it works for you.
Thanks
I just started with Meteor app development and have a use case which I am not sure is a good fit for Meteor.
We have a Java application that pushes data to Redis at a very fast rate (updates arrive in under 50 milliseconds), and we are building a web application (on Node.js) which connects to this Redis instance and sends the data to the client. For now (with a native Node.js app), we send data only twice a second, as we do not require such fast updates.
My question is: how can I achieve the same with Meteor? As we know, Meteor has live query, which tends to send data as soon as it changes, but that is not optimal for us. Is there a way to tune live query to send data only after a certain interval?
Thanks
I think you are looking for a way to throttle Meteor's calls. This could be done with this library.
This issue has also been discussed here. Reading up on it, I think they still haven't implemented it in core. That would make sense, since there are no out-of-the-box throttling mechanisms in Node or io.js.
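Another workaround, assuming the data ends up in a Mongo collection, is to batch changes inside the publication yourself instead of forwarding every write. A sketch (Readings is a hypothetical collection; the 500 ms flush gives you the twice-a-second rate you mentioned):

```js
// server-side publication that forwards changes at most twice a second
Meteor.publish('readings', function () {
  const self = this;
  let pending = new Map(); // latest changed fields per document id

  const flush = () => {
    pending.forEach((fields, id) => self.changed('readings', id, fields));
    pending = new Map();
  };
  const interval = Meteor.setInterval(flush, 500);

  const handle = Readings.find().observeChanges({
    added(id, fields) {
      self.added('readings', id, fields); // new documents go out immediately
    },
    changed(id, fields) {
      pending.set(id, fields); // coalesce rapid updates until the next flush
    },
  });

  self.ready();
  self.onStop(() => {
    Meteor.clearInterval(interval);
    handle.stop();
  });
});
```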
Hope this was helpful.
I'm a Rails developer who has just migrated to Node, and I've decided to write an Angular application backed by a Postgres/Express.js REST API. I use the API primarily for CRUD operations thus far, but I want to start a real-time game instance when two players visit a certain page (challenge each other). I'm thinking of using socket.io to accomplish the real-time functionality.
The game is similar to Pokémon on the Game Boy, in which two players take turns performing certain actions until one of them wins.
I have the following questions:
Should I have a separate server to handle the game using socket.io, or can I use the same one my API operates on?
Should I use a service like Pusher or can I create the architecture myself?
How would I go about making sure no data is lost, if say, a player disconnects during a game?
At which point (number of concurrent connections/request per second) would I run into performance issues? 100, 1000, 10000?
Thanks
If the realtime logic is closely related to the CRUD stuff (i.e. realtime events are a direct result of writes to the API), and you expect somewhat equal usage of both aspects of the system, then I'd put both on the same server.
I highly recommend using a realtime push service if possible (disclaimer: I work for Fanout.io). It'll be simpler and probably less expensive too.
The key to making sure data is not lost is to persist it on the server before sending. Don't depend on the realtime layer for persistence (biggest mistake you can make). When the client reconnects, it can request data it may have missed via the normal API. So, just get your CRUD stuff correct and then layer realtime eventing on top. You can create a very network resilient service this way.
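In practice that pattern looks something like this (a sketch; it assumes pg is a configured node-postgres pool, io is the socket.io server, and the table and event names are illustrative):

```js
// sketch: write the move to Postgres first, then notify via socket.io
app.post('/games/:id/moves', async (req, res) => {
  // 1. persist - the database is the source of truth
  const { rows } = await pg.query(
    'INSERT INTO moves (game_id, player_id, action) VALUES ($1, $2, $3) RETURNING *',
    [req.params.id, req.body.playerId, req.body.action]
  );
  const move = rows[0];

  // 2. then push - if a client misses this, it can re-fetch via the API
  io.to(`game:${req.params.id}`).emit('move', move);
  res.status(201).json(move);
});
```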
You should be able to get to a few hundred concurrent connections without much thought. Going beyond will take architecture planning. Of course, if you delegate to a push service then you don't have to worry about this, at least for the realtime part.
The Setup:
Imagine a 'twitter like' service where a user submits a post, which is then read by many (hundreds, thousands, or more) users.
My question is regarding the best way to architect the cache and database to optimize for quick access and many reads, while still keeping the historical data so that users may (if they want) see older posts. The assumption here is that 90% of users are only interested in the new stuff and that the old stuff will be accessed occasionally. The other assumption is that we want to optimize for the 90%, and it's OK if the older 10% take a little longer to retrieve.
With this in mind, my research seems to strongly point in the direction of using a cache for the 90%, plus another longer-term persistent store for the posts. So my idea thus far is to use Redis for the cache. The advantage is that Redis is very fast, and it has built-in pub/sub, which would be perfect for publishing posts to many people. I was then considering MongoDB as the more permanent data store, holding the same posts to be read as they expire out of Redis.
Questions:
1. Does this architecture hold water? Is there a better way to do this?
2. Regarding the mechanism for storing posts in both Redis and MongoDB, I was thinking about having the app do two writes: first write to Redis, so the post is immediately available to subscribers; then, after successfully storing it in Redis, write to MongoDB immediately. Is this the best way to do it? Should I instead have Redis push the expired posts to MongoDB itself? I thought about this, but I couldn't find much information on pushing to MongoDB from Redis directly.
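For reference, the dual write I have in mind is roughly this (assuming an ioredis client and the MongoDB Node.js driver; key and channel names are placeholders):

```js
// sketch of the two-write flow under consideration
async function publishPost(post) {
  const raw = JSON.stringify(post);

  // 1st write: Redis, so subscribers see the post immediately
  await redis.lpush(`timeline:${post.authorId}`, raw); // cache for the hot 90%
  await redis.publish('posts', raw);                   // fan out via pub/sub

  // 2nd write: MongoDB, the permanent store
  await db.collection('posts').insertOne(post);
}
```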
It is actually sensible to associate Redis and MongoDB: they are good team players. You will find more information here:
MongoDB with redis
One critical point is the resiliency level you need. Both Redis and MongoDB can be configured to achieve an acceptable level of resiliency, and these considerations should be discussed at design time. Also, it may put constraint on the deployment options: if you want master/slave replication for both Redis and MongoDB you need at least 4 boxes (Redis and MongoDB should not be deployed on the same machine).
Now, it may be a bit simpler to keep Redis for queuing, pub/sub, etc ... and store the user data in MongoDB only. Rationale is you do not have to design similar data access paths (the difficult part of this job) for two stores featuring different paradigms. Also, MongoDB has built-in horizontal scalability (replica sets, auto-sharding, etc ...) while Redis has only do-it-yourself scalability.
Regarding the second question, writing to both stores from the application is the easiest way to do it. There is no built-in feature to replicate Redis activity to MongoDB. Designing a daemon that listens to a Redis queue (where activity would be posted) and writes to MongoDB is not that hard, though.
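Such a daemon can be just a few lines, using a Redis list as the queue (a sketch with illustrative names; the app would LPUSH each post onto posts:queue in addition to publishing it):

```js
// daemon.js - drain a Redis queue into MongoDB
const Redis = require('ioredis');
const { MongoClient } = require('mongodb');

async function main() {
  const redis = new Redis();
  const mongo = await MongoClient.connect('mongodb://localhost:27017');
  const posts = mongo.db('app').collection('posts');

  for (;;) {
    // block until an item is available (BRPOP pairs with the app's LPUSH)
    const [, raw] = await redis.brpop('posts:queue', 0);
    await posts.insertOne(JSON.parse(raw));
  }
}

main().catch(console.error);
```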