Keep a unified time on a P2P network

I'm developing a P2P network using Synapse for Lazarus. I have almost everything working, but I can't find a way to keep a "global" time inside the network. Each user has their own local time (and timestamp), so when they send a message to the network, I can't see how that message could carry a date/time format that every peer understands. Any idea how this could be implemented?

Related

How to get live information with nodes

I need some advice or help. I am trying to create an ordering system like the one in McDonald's, and I need a live feed of the current orders with the ability to manipulate them in real time as well. The only approach I can think of is sending a GET request every second or so, but that could cause performance problems. Is there another way?
There is! It's called socket.io and is made for real-time communication.
In most cases you have a server which manages the communication and also has a database. All clients connect to this server and either emit data or subscribe to events.
There are tons of tutorials out there. I recommend following one and building a simple chat application before you use socket.io in your ordering application.
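For illustration, a minimal sketch of that pattern, assuming an Express app and the socket.io package; the `/orders` route, the `order:update` event name and `renderOrder()` are hypothetical stand-ins for your ordering logic:

```js
// server.js -- push order changes to all clients instead of having them poll.
const express = require('express');
const http = require('http');
const { Server } = require('socket.io');

const app = express();
const server = http.createServer(app);
const io = new Server(server);

app.use(express.json());

// Hypothetical route: whenever an order is created or updated, broadcast it.
app.post('/orders', (req, res) => {
  const order = req.body;         // e.g. { id: 1, item: 'Big Mac', status: 'preparing' }
  io.emit('order:update', order); // every subscribed client gets it instantly
  res.sendStatus(201);
});

server.listen(3000);
```

On the browser side the client subscribes once and re-renders on every update, with no polling:

```js
const socket = io('http://localhost:3000');
socket.on('order:update', (order) => renderOrder(order)); // renderOrder is your UI code
```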

What's the best way to share objects between webservers on different hosts

TL;DR:
I am wondering what the best way is to reliably share an object or other data between n webservers on n machines?
I have looked at the likes of Redis, but it seems that it is not actually what I am looking for here. I am now thinking that something like remote IPC / RPC might be more appropriate. Is there a better way to do this, given it will be called at minimum 10 times over a 30-second interval, and that this can grow rapidly as the number of users running servers grows too?
Example & current use case:
I run a multiplayer mod for a game which receives a decent level of traffic, and we are starting to notice cases where requests sometimes get dropped. The backend webserver is written in Node.js and uses Express in a couple of places too. We are in the process of restructuring the system, and we have now come to the part that handles a heartbeat from each server that members of the public host. This information is then shared with the players so they can decide which server to join.
Based on my own research, I am looking to host the service on several different machines for redundancy. These machines are linked over VLAN / vSwitch so they have a secure way to communicate with each other. The database is already set up to replicate this way; however, I cannot see a performant way to share the objects containing information about the servers that have contacted each webhost.
If it helps the system works something like this:
Users server -> my load balancer -> webhost (backend).
Player -> my load balancer -> webhost (backend) returns info on all currently online servers.
The example above describes what is currently in use: a single-instance webserver which handles the requests and the processing needed.
Just an idea while the community proposes answers: consider reading about Apache Thrift. It is not so much IPC-like as RPC-like. If the architecture of your servers, or of the different components of the backend network, is a "star" with one "master", I would consider that possibility.
If your backend is instead a group of "independent" entities, it comes to mind to solve this with some kind of "data bus", such as a private MQTT broker with a group of members subscribing to or publishing data for the rest of the network. The most efficient serialization strategy for the object would, in my opinion, be Google Protobuf.
Integrating MQTT with Node.js is very simple, and if the packets are not too heavy and you can accept some latency, I would really recommend running some tests using MQTT with publish/subscribe at QoS 2. It would not take great effort to substitute the underlying communications library you are using.
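As a rough illustration of that data-bus idea (a sketch, assuming the `mqtt` npm package and a private broker; the broker URL and the `servers/heartbeat` topic are hypothetical, and JSON is used here for brevity where Protobuf was suggested):

```js
const mqtt = require('mqtt');

// Hypothetical private broker; every webhost runs this same client.
const client = mqtt.connect('mqtt://broker.internal:1883');

client.on('connect', () => {
  // Subscribe to heartbeats published by the other webhosts.
  client.subscribe('servers/heartbeat', { qos: 2 });
});

client.on('message', (topic, payload) => {
  const heartbeat = JSON.parse(payload.toString());
  // ...merge into this host's view of the online game servers
});

// Called when a game server sends its heartbeat to this webhost:
// publish it so every other webhost sees it too (QoS 2 = exactly once).
function shareHeartbeat(heartbeat) {
  client.publish('servers/heartbeat', JSON.stringify(heartbeat), { qos: 2 });
}
```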
That said, there seems to be another interesting option: Kafka (though I don't really know it).
Your choice will depend mostly on the nature of your data: the weight of the packets, the frequency per user, and the latency you are willing to accept in the worst-case scenario.

Real Time Tracking System

My client has 1000 taxis, and he wants to track every taxi's location and see it on his display. My question is how to track all taxi information using the driver's mobile device. I am using MongoDB for the database.
I plan to solve this by developing an API that each mobile device calls with its location every 10 seconds, but the problem is that the server gets very busy and cannot keep up with the API traffic.
I saw that Firebase stores client information in real time. I need to know whether I can get Firebase-like real-time behaviour using MongoDB.
I am using Node.js for backend development. If anyone knows a way to store real-time data, please help.
You cannot (generally speaking) track taxis in real time: the device's Internet connection may be poor due to a weak signal, may have really high latency at times, or may even be down. Instead, design two independent applications:
One which stores the current GPS location in a local FIFO queue
A second which flushes the queue to the remote server
This approach ensures you will, eventually, receive all the positions without having to worry about dropped packets and the other issues that may, and will, occur (a minimal sketch follows after the notes below).
Instead of a TCP connection you can consider UDP (or better, DTLS), which is faster but less reliable. If reliability is a must (doubtful for a taxi tracker), go for TCP (or better, TLS). How you send or receive the data is just a detail.
Also make sure you authenticate the device before you store any data, especially if the connection between devices is not secure.
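A minimal sketch of that split, assuming Node on the device (the same structure applies in a native mobile app); `sendToServer()` is a hypothetical HTTPS call to your backend:

```js
// Local FIFO queue: fixes are never lost just because the network is down.
const queue = [];

// App 1: sample the GPS every 10 seconds and enqueue locally.
// onGpsFix would be wired to whatever location API the device exposes.
function onGpsFix(position) {
  queue.push({ taxiId: 'taxi-42', ...position, ts: Date.now() });
}

// App 2: flush the queue independently; on failure, keep the entries
// and retry on the next tick, so positions arrive eventually, in order.
async function flush() {
  while (queue.length > 0) {
    try {
      await sendToServer(queue[0]); // hypothetical, e.g. POST /positions over TLS
      queue.shift();                // drop the entry only once the server confirmed
    } catch (err) {
      break;                        // connection down: leave queue intact, retry later
    }
  }
}
setInterval(flush, 10000);
```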
You can use Firebase as a real time database. Have a look at this link: https://github.com/firebase/geofire
But if you want to stay with MongoDB, you can use MongoDB Geospatial Queries and socket.io for the real-time behaviour.
For more details: https://docs.mongodb.com/manual/geospatial-queries/
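As a rough sketch of the MongoDB route (assuming the official `mongodb` Node driver and socket.io; the database, collection, field and event names are hypothetical):

```js
const { MongoClient } = require('mongodb');

const client = new MongoClient('mongodb://localhost:27017');
const taxis = client.db('fleet').collection('taxis');

async function init() {
  await client.connect();
  // Geospatial queries require a 2dsphere index on the location field.
  await taxis.createIndex({ location: '2dsphere' });
}

// Store each position report as it arrives, then push it out over socket.io.
async function onPositionReport(report, io) {
  await taxis.updateOne(
    { taxiId: report.taxiId },
    { $set: { location: { type: 'Point', coordinates: [report.lng, report.lat] },
              ts: new Date() } },
    { upsert: true }
  );
  io.emit('taxi:moved', report); // real-time update for the client's display
}

// Geospatial query: all taxis within 5 km of a point, nearest first.
function nearby(lng, lat) {
  return taxis.find({
    location: {
      $near: {
        $geometry: { type: 'Point', coordinates: [lng, lat] },
        $maxDistance: 5000, // metres
      },
    },
  }).toArray();
}
```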
You can use https://socket.io/ for the real-time tracking system.
It is a JavaScript library for real-time web applications.
You just need to configure MongoDB.
There are many blog posts explaining how to set up socket.io with MongoDB.
Some of them are...
http://blog.slatepeak.com/creating-a-real-time-chat-api-with-node-express-socket-io-and-mongodb/
https://blog.feathersjs.com/building-a-rest-and-real-time-api-with-express-feathers-and-mongodb-12071e5417e1
I think you are planning to implement the tracking in the frontend.
That is not a good or secure approach, because drivers could send fake requests in real time.
You can use WebSockets to implement real-time checking of the taxis.
Please check this link.
I think this approach is similar to your idea, and I hope it works for you.
Thanks

Options for getting a CPU intensive job off my web server?

I have been working on a Web App for visualizing live data. It is crucial that this data is kept up to date on the client side without such updates being invoked directly by the client (e.g. no button presses or refreshing the page). Currently, on page load, I grab the current data set from a database (DynamoDB) via Ajax, and subsequent updates are pushed to any listening clients every 5 minutes via a Websockets connection (using Socket.io).
I have overlooked the computational load of this update job. It has to mine some data, process it, update the database, and send the update out to all clients. As a result, the web server is left unresponsive for about 30 seconds with each update. Furthermore, my current architecture limits me from putting my server behind a load balancer, which is something I anticipate coming up in the future. For both these reasons, I really need to get this update job off my web server.
I am relatively inexperienced in web development, and I don't feel I am knowledgeable enough about these technologies to know the drawbacks of the solutions I have come up with. Currently, I am considering:
Break the update off into a separate process so it does not block the Node event loop. This would solve my issue in the short term, but if I ever want to load balance my application, I can't have the update running on multiple machines.
Drop Websockets entirely and just have the client query the database every 5 minutes, while a separate process (or separate server if I want load balancing) keeps the database up to date without interacting directly with the client. Will this kind of access pattern put too much load on my db?
Have a separate server run the update, and send the result via Websockets (or maybe some other protocol) to my load balanced application servers, which then push that update to all listening clients as usual. Is this even possible?
Perhaps there are other solutions. It seems like this would be a relatively common problem, so I was hoping I could find some guidance here. What are the potential issues with the solutions I have proposed, and are there other possible solutions that may suit my use case better?
It sounds like you want one process sitting somewhere which crunches the data and publishes it to a stream. Clients can then subscribe to the stream as and when they like. Redis handles streams nicely, you could process your data and push it into a redis stream. You could then create a small node service which subscribes to the redis stream and pushes the formatted data out over a websocket or via polling.
In this scenario you can then scale up either the publishing process (the one crunching the numbers) if your data load goes up, or the subscriber process (which serves the data over a websocket to browsers) if you get an influx of clients watching the data.
You can also easily distribute the hosting of these services across other machines, and even write them in different languages if you decide the number crunching needs something like threading.
You're then left with the issue of clients (web browsers) consuming this data with a load balancer in between. This can be a hard problem if you use websockets, and it comes with its own pros and cons. But importantly, you'll have separated your data crunching from your result publishing, which isolates the remaining issue to just the load balancing.
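A rough sketch of that split with Redis streams (assuming the `ioredis` package; the `data:results` stream key and `data:update` event name are hypothetical):

```js
const Redis = require('ioredis');

// Publishing process: crunch the numbers, then append the result
// to a Redis stream.
const producer = new Redis();
async function publishResult(result) {
  await producer.xadd('data:results', '*', 'payload', JSON.stringify(result));
}

// Serving process: block on the stream and fan new entries out to
// browsers over socket.io. Scales independently of the cruncher.
const consumer = new Redis();
async function pump(io) {
  let lastId = '$'; // start with entries added from now on
  for (;;) {
    const res = await consumer.xread('BLOCK', 0, 'STREAMS', 'data:results', lastId);
    for (const [, entries] of res) {
      for (const [id, fields] of entries) {
        lastId = id;
        io.emit('data:update', JSON.parse(fields[1])); // fields = ['payload', '<json>']
      }
    }
  }
}
```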
I have done pretty much the same thing to check resources on some of our servers.
I have a C# service collecting the information on each server that we manage and sending it to a queue (AMQ).
From there, a STOMP client fetches the data from AMQ and emits it to a websocket.
My main microservice fetches the data to save it into a DB.
My visualisation webapp is connected to the same websocket and displays the data as it arrives.
The AMQ step isn't mandatory at all; it's just something I had to work with (historical reasons).
I don't know what type of data you are working with, so I don't know whether my solution applies to you.
Don't hesitate to ask if I'm not clear or you have any questions.
This is a big question and I'm not going to try and give you a definitive answer.
For option 2
It really depends on how expensive your queries are. You can make DynamoDB fast if you pay for enough throughput. That said, on the face of it, re-loading your whole dataset, which sounds like it is probably large, probably isn't good engineering.
For option 3
This option seems best to me if it's achievable, although admittedly it's hard to say with such a complex system; obviously you can't share your whole project.
Given you are already using AWS, you might want to look into AWS Lambda. If you can move the update process into a standalone job, you can host it on Lambda and take the load off the web server. Lambda is essentially infinitely scalable, and you only pay for the compute you use.
This really depends on being able to split the update task off into a separate service. It's likely you would need a fair bit of refactoring to isolate it as a service. If you can break little bits off at a time and make the move gradually, even better.
If you consider trying this and you've not used Lambda before, I would definitely start small with some hello-world examples, then try a very simple service in your application, and build up to taking on the update service.
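As a hedged illustration, here is a minimal sketch of the update job as a standalone Lambda handler (Node.js runtime, AWS SDK v2; `mineData()`, `processData()` and the `LiveData` table are hypothetical stand-ins for your existing logic):

```js
// handler.js -- scheduled (e.g. every 5 minutes) instead of running
// inside the web server's event loop.
const AWS = require('aws-sdk');
const db = new AWS.DynamoDB.DocumentClient();

exports.handler = async () => {
  const raw = await mineData();       // hypothetical: your data-mining step
  const processed = processData(raw); // hypothetical: your processing step

  // Persist the fresh dataset so web servers (and clients) can read it.
  await db.put({ TableName: 'LiveData', Item: processed }).promise();

  // From here, notify the web tier (queue, HTTP hook, etc.) so it can
  // push the update out over its websocket connections as before.
  return { statusCode: 200 };
};
```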
You might also consider looking into AWS Simple Queue Service (SQS) to handle the comms between clients and server.
Database tuning
If a lot of your update time is spent waiting for database actions to complete, rather than on server processing, you can consider tuning that side of things. Things to consider are:
Buying more throughput
Using batch operations, as these move load from your server to DynamoDB (see the sketch below)
Tuning keys, indexes and database access
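For example, a small sketch of a DynamoDB batch write with the SDK v2 DocumentClient (table and item shapes are hypothetical); one batchWrite call carries up to 25 puts, replacing 25 separate round trips:

```js
const AWS = require('aws-sdk');
const db = new AWS.DynamoDB.DocumentClient();

async function saveBatch(items) {
  // Note: batchWrite may return UnprocessedItems under heavy load;
  // production code should retry those with backoff.
  await db.batchWrite({
    RequestItems: {
      LiveData: items.slice(0, 25).map((item) => ({
        PutRequest: { Item: item },
      })),
    },
  }).promise();
}
```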

Stream WebCam using socket.io

I have been trying to implement a web application that will be able to handle following scenario:
Streaming video/audio from a client to other clients (actually a particular set of them, not broadcasting) and to the server at the same time. The data source would be the client's webcam.
This streamed data has to be displayed in real time in the other clients' browsers and saved on the server side for 'archiving' purposes.
It has to be implemented in node.js + socket.io environment.
To put it in some more specific context... The scenario is that a host creates a kind of room for the users that he chooses. After the chosen users join the room, the creator starts streaming video/audio from his or her built-in devices (webcam). All of the guests receive the data in real time; moreover, the data is sent to the server, where it is stored so it can be recovered after the stream ends and the room is closed.
I was thinking about combining Socket.IO with WebRTC. In theory the combination of the two seems perfect for the job.
Socket.IO is great for gathering a specific set of users by assigning their sockets to a room, and for the signaling process demanded by WebRTC.
At the same time, WebRTC is great for the P2P connections between users gathered in the same room; it also makes it really easy to get access to the webcam and other built-in devices I might want to use.
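For the signaling part, I imagine something like this minimal sketch using Socket.IO rooms (the 'join' and 'signal' event names are just my own hypothetical convention):

```js
// Server side (Node + socket.io): relay-only signaling, no media here.
io.on('connection', (socket) => {
  // The room creator and the chosen guests all join the same room.
  socket.on('join', (roomId) => {
    socket.join(roomId);
    socket.to(roomId).emit('peer-joined', socket.id);
  });

  // Relay WebRTC offers/answers/ICE candidates to one specific peer;
  // once negotiated, the media itself flows P2P, not through this server.
  socket.on('signal', ({ to, data }) => {
    io.to(to).emit('signal', { from: socket.id, data });
  });
});
```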
So yeah, everything is looking pretty decent in theory but I would really need to see some code in action so I could actually try to implement it on my own. Moreover, I see some issues:
How do I save the stream that is sent by the P2P connection? Obviously server does not have access to that. I was thinking that I might treat the server as another 'guest', so it would be just another endpoint of the P2P connection with the creator of the room. Somehow it feels edgy, though.
Wouldn't it be better to treat the server as a middleman between the creator and the clients? At some point there might be a slight, probably insignificant, delay compared to P2P, but presumably it would be the same for all the clients. (I tried that, but I couldn't get the webcam streaming to the server working; that, however, is a topic for a different question, as I am having problems processing the MediaStream.)
I was looking for some nice solutions, but without any success. I have seen that there is a nice P2P solution made for socket.io: http://socket.io/blog/socket-io-p2p/ . The thing is, I don't think it will handle the data stream well. The examples mention only a simple chat app, and I need something a little heavier than that.
I would be really thankful for some specific examples, docs, whatever may lead me a little closer to the implementation of it as I really don't know how to approach it.
Thanks in advance :)
Your task can be solved using one of the open-source WebRTC servers.
For example, kurento.
You can implement the following streaming schemas:
One to one
One to many
Many to many
(diagram: WebRTC server schema)
Clients connect to each other through the WebRTC server.
So, on the server side, you can record the stream or send it for transcoding.
A WebSocket is used for communicating with the server.
You can find examples matching your task.
Video streaming to multiple users is a really hard problem that unfortunately requires extensive infrastructure to achieve. You will not be able to stream video data through a websocket. WebRTC is also not a viable solution for what you are describing because, as you mentioned, the WebRTC protocol is P2P, as in the streaming user will need to make a direct connection to all the 'viewers'. This will obviously not scale beyond a few 'viewers'. WebRTC is more for direct video calls like in Skype for example.
Here is an article describing the architecture used by a somewhat popular live streaming service. As you can see achieving live video at any sort of scale will require considerable resources.
