I'm trying to create a media streaming server that streams images captured from a camera to connected JavaScript/HTML clients.
Currently, I have developed a Windows service that captures images and sends them to multiple clients through continuous polling; however, it performs poorly: it congests the network with too much traffic and introduces delays in the streams.
The service runs on a Hyper-V VM with 6 cores and 8 GB of memory.
Where should I look for the source of the lag? Any suggestions?
You should implement a queue or use something like SignalR to push updates instead of polling. Also look for ways to reduce the traffic you send over the network, such as zipping (compressing) or caching the images.
Look at Hangfire.io; it's good too.
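SignalR itself is a .NET library; here is a minimal sketch of the same push idea in Node.js with the ws package (the port and the captureFrame hook are assumptions for illustration, not part of your service):

    // Push-based broadcaster using the "ws" package: clients connect once over
    // a WebSocket and the server pushes each new JPEG frame to all of them,
    // so nobody has to poll.
    const WebSocket = require('ws');

    const wss = new WebSocket.Server({ port: 8080 }); // port is an assumption

    function broadcastFrame(jpegBuffer) {
      for (const client of wss.clients) {
        if (client.readyState === WebSocket.OPEN) {
          client.send(jpegBuffer); // binary frame; the browser receives a Blob
        }
      }
    }

    // captureFrame() is a placeholder for however the camera image is obtained:
    // setInterval(() => broadcastFrame(captureFrame()), 100); // ~10 fps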
I'm part of a team building a service controlling EV charging stations. The protocol is called OCPP, and the underlying transport protocol is WebSockets.
The stations call the host, and the WebSocket connection is then kept open; both server and client initiate commands.
We have implemented the protocol and all that; the question we're looking into is how to scale and host the WebSockets (handling load pressure with queues, etc. is not the question).
We're currently on Azure, and in our prototypes we have used Azure App Service, which works fine; however, we have not yet looked into its limitations when it comes to scaling.
We have looked at Azure Pub Sub; however, it doesn't seem compatible with OCPP.
The question is: what type of hosting should we look at for the WebSockets?
The documentation says that the number of WebSockets is unlimited starting with the Standard plan, but take a good look at the side note:
If you scale an app in the Basic tier to two instances, you have 350 concurrent connections for each of the two instances. For Standard tier and above, there are no theoretical limits to web sockets, but other factors can limit the number of web sockets. For example, maximum concurrent requests allowed (defined by maxConcurrentRequestsPerCpu) are: 7,500 per small VM, 15,000 per medium VM (7,500 x 2 cores), and 75,000 per large VM (18,750 x 4 cores).
I have created a channel on Azure Media Services, configured it correctly as an RTMP channel, and streamed live video with Android + FFmpeg libraries.
The problem is the latency at the client endpoint.
I need a maximum latency of ~2 seconds, but right now I'm seeing about ~25 seconds!
I'm using Azure Media Player in a browser page to stream the content.
Do you know of a client/channel configuration that can reduce the latency?
Thanks
As you pointed out, there are a few factors which affect latency.
Total delay time =
1. time to push the video from the client to the server, plus
2. server processing time, plus
3. latency for delivering the content from the server to the client.
Check https://azure.microsoft.com/en-us/documentation/articles/media-services-manage-origins/#scale_streaming_endpoints to see how you can minimize #3 above by configuring a CDN and scaling streaming endpoint units.
Given these three components, I don't think at this stage you will be able to achieve less than 2 seconds of end-to-end delay globally, from an Android client to a browser client.
The easiest way to check latency is: ffplay -fflags nobuffer rtmp:///app/stream_name
as I did in this video: https://www.youtube.com/watch?v=Ci646LELNBY
If there's no latency with ffplay, then it's the player that introduces the latency.
We're developing an Internet of Things application. Currently we get the data from the device and send it to an Event Hub on Azure.
We can have a lot of connection problems in the field, sometimes offline periods that can last for days.
Is there a library to store our messages and forward them to the Event Hub when possible? We've heard about RabbitMQ, but here the volume of data is huge (near 3/4 GB per day). Is it capable of managing such volumes? Does anyone have experience with a similar scenario?
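The sketch below is roughly the pattern we're after (illustration only, using the @azure/event-hubs package; the connection string, file path, and retry interval are placeholders, and real code would have to split the backlog into size-limited batches):

    // Hypothetical store-and-forward loop: append events to a local file while
    // offline, then drain the file to Event Hubs when the connection returns.
    const fs = require('fs');
    const { EventHubProducerClient } = require('@azure/event-hubs');

    const QUEUE_FILE = './pending-events.ndjson'; // placeholder path
    const producer = new EventHubProducerClient('<connection string>', '<hub name>');

    function store(event) {
      // One JSON document per line, so the backlog survives restarts.
      fs.appendFileSync(QUEUE_FILE, JSON.stringify(event) + '\n');
    }

    async function forward() {
      if (!fs.existsSync(QUEUE_FILE)) return;
      const lines = fs.readFileSync(QUEUE_FILE, 'utf8').split('\n').filter(Boolean);
      try {
        // Note: a real implementation must respect Event Hubs' batch size limit.
        await producer.sendBatch(lines.map((line) => ({ body: JSON.parse(line) })));
        fs.unlinkSync(QUEUE_FILE); // clear the backlog only after a successful send
      } catch (err) {
        // Still offline: keep the file and try again on the next interval.
      }
    }

    setInterval(forward, 60 * 1000); // retry once a minute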
This may be beyond the scope of what Node.js can control, but any pointers to other solutions to this problem are appreciated as well.
I'm uploading large files to a Node.js service via HTTP POST. When this runs locally or on the LAN, the chunks the server receives are always 65536 bytes in size and uploads are fast.
When I upload the same file to the same code running on a remote server (a Google Cloud VM, or real hardware at a co-lo) the received chunks are between 1448-2869 bytes and uploads are much slower (far below the bandwidth limits of the connection).
I'm not sure where the decision is being made to send smaller chunks across the WAN connection, if it's a calculation performed by the client software (both curl and node.js clients produce the same result), or if it's routing hardware in-between slicing up the packets, or something completely different.
I'm wondering if there is anything I can do to force larger chunks through to the server, or perhaps an alternative approach that overcomes the server thrash associated with processing these small chunks?
After a lot of experimentation and discussing this with a network engineer, the heart of the issue is the limit imposed by the MTU once the connection goes beyond a direct Ethernet link: with a typical 1500-byte Ethernet MTU, each TCP segment carries roughly 1448 bytes of payload after the IP and TCP headers, which matches the chunk sizes I was seeing.
My solution was to modify the application to use multiple simultaneous requests to essentially "fill the gaps" created by the network overhead of the smaller chunks. The result was substantially faster network transfers at the cost of a little increased complexity.
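Roughly, the upload side looked like the sketch below (not my exact code; it assumes Node 18+ for the built-in fetch, and the URL, part count, and X-Upload-Offset header are made up for illustration; the server must know how to reassemble the parts):

    // Illustrative parallel uploader: split the file into a few large parts and
    // send them concurrently, so the small TCP segments of each connection overlap.
    const fs = require('fs/promises');

    async function uploadInParallel(path, url, parts = 4) {
      const data = await fs.readFile(path);
      const partSize = Math.ceil(data.length / parts);
      const requests = [];
      for (let i = 0; i < parts; i++) {
        const chunk = data.subarray(i * partSize, (i + 1) * partSize);
        // "X-Upload-Offset" is a made-up header telling the server where
        // this part belongs in the reassembled file.
        requests.push(fetch(url, {
          method: 'POST',
          headers: { 'X-Upload-Offset': String(i * partSize) },
          body: chunk,
        }));
      }
      await Promise.all(requests); // all parts in flight at once
    }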
OK, so I have an idea I want to pursue, but before I do I need to understand a few things fully.
Firstly, the way I think I'm going to go ahead with this system is to have 3 servers, which are described below:
The first server will be my web front end; this is the server that will listen for connections and respond to clients. This server will have 8 cores and 16 GB of RAM.
The second server will be the database server; pretty self-explanatory really: connect to the host and set/get data.
The third server will be my storage server; this is where downloadable files are stored.
My first question is:
On my front-end server I have 8 cores; what's the best way to scale Node so that the load is distributed across the cores?
My second question is:
Is there a system out there that I can drop into my application framework that will allow me to talk to the other cores and pass messages around to save I/O?
And my final question:
Is there any system I can use to help move content from my storage server to a requesting client via the front-end server with as little overhead as possible? Speed is a concern here, as we would have 500+ clients downloading and uploading concurrently at peak times.
I have finally convinced my employer that Node.js is extremely fast and is the latest in programming technology, and that we should invest in a platform for our intranet system, but he has requested detailed documentation on how this could be scaled across the hardware we currently have available.
"On my front-end server I have 8 cores; what's the best way to scale Node so that the load is distributed across the cores?"
Take a look at the Node.js cluster module, which is a multi-core server manager.
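A minimal example, forking one worker per core (the HTTP handler is just a placeholder):

    // Fork one worker per CPU core; all workers share the same server port.
    const cluster = require('cluster');
    const http = require('http');
    const numCPUs = require('os').cpus().length;

    if (cluster.isMaster) {
      for (let i = 0; i < numCPUs; i++) {
        cluster.fork();
      }
      cluster.on('exit', (worker) => {
        console.log(`worker ${worker.process.pid} died, restarting`);
        cluster.fork(); // keep the pool at full size
      });
    } else {
      // Each worker runs its own copy of the server on the shared port.
      http.createServer((req, res) => {
        res.end(`handled by pid ${process.pid}\n`);
      }).listen(8000);
    }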
Firstly, I wouldn't describe the setup you propose as 'scaling'; it's more like 'spreading'. You only have one app server serving the requests. If you add more app servers in the future, you will have a scaling problem then.
I understand that Node.js is single-threaded, which implies that it can only use a single core. How (or whether) you can scale it is not my area of expertise, so I'll leave that part to someone else.
I would suggest NFS mounting a directory on the storage server to the app server. NFS has relatively low overhead. Then you can access the files as if they were local.
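Once the NFS directory is mounted, a download handler on the app server is just a local read stream; here's a minimal sketch (the mount point and port are placeholders, and real code needs proper path sanitizing):

    // Stream a file from the NFS mount straight to the HTTP response, so the
    // app server never buffers the whole file in memory.
    const http = require('http');
    const fs = require('fs');
    const path = require('path');

    const MOUNT = '/mnt/storage'; // hypothetical NFS mount point

    http.createServer((req, res) => {
      const file = path.join(MOUNT, path.normalize(req.url));
      if (!file.startsWith(MOUNT)) { // reject path traversal
        res.statusCode = 403;
        return res.end();
      }
      const stream = fs.createReadStream(file);
      stream.on('error', () => { res.statusCode = 404; res.end('not found'); });
      stream.pipe(res); // backpressure is handled by pipe()
    }).listen(8080);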
Concerning your first question: use cluster (we already use it in a production system; it works like a charm).
When it comes to worker messaging, I cannot really help you out much, but your best bet is cluster too. Maybe there will be some functionality that provides "inter-core" messaging across all cluster workers in the future (I don't know cluster's roadmap, but it seems like a natural idea).
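That said, one way to approximate "inter-core" messaging with the current API is to relay through the master process, since every worker already has a message channel to it; a rough sketch:

    // Relay worker-to-worker messages through the master process.
    const cluster = require('cluster');

    if (cluster.isMaster) {
      cluster.fork();
      cluster.fork();
      // Re-broadcast anything one worker sends to every other worker.
      cluster.on('message', (sender, msg) => {
        for (const id in cluster.workers) {
          if (cluster.workers[id] !== sender) cluster.workers[id].send(msg);
        }
      });
    } else {
      process.on('message', (msg) => console.log(`pid ${process.pid} got`, msg));
      process.send({ hello: `from pid ${process.pid}` });
    }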
For your third requirement, I'd use a low-overhead protocol like NFS or (if you can go really crazy when it comes to infrastructure) a high-speed SAN backend.
One more piece of advice: use MongoDB as your database backend. You can start with low-end hardware and scale your database instance with ease using MongoDB's sharding/replica set features (if that is some kind of requirement).
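For illustration, connecting to a replica set from Node with the official mongodb driver looks something like this (host names, set name, and database are placeholders):

    // Connect to a MongoDB replica set; the driver fails over between members.
    const { MongoClient } = require('mongodb');

    // Host names and replica set name are placeholders.
    const uri = 'mongodb://db1.example.com,db2.example.com,db3.example.com/mydb?replicaSet=rs0';

    async function main() {
      const client = new MongoClient(uri);
      await client.connect();
      const users = client.db('mydb').collection('users');
      await users.insertOne({ name: 'test' });
      await client.close();
    }

    main().catch(console.error);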