I'm playing around with Azure Media Services and the live streaming feature. I'm using OBS as my streaming source and just trying to stream my desktop from my laptop and view it on my desktop machine.
It all works fine but there is a tremendous lag time (north of 30s). That's not really a "live" stream. I tried creating my live event with "low latency" checked to see if that would improve the lag time and it doesn't appear to have done anything.
I'm just doing a simple pass-through so no encoding on the server or anything. Is there something else I can do to improve the lag time besides the low latency toggle?
There's good info on https://learn.microsoft.com/en-us/azure/media-services/latest/live-event-latency that talks about the low latency feature on the live event as well as a setting you set on the player. It also discusses the expected latencies.
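In case it helps, the player-side piece of that looks roughly like the sketch below if you're using Azure Media Player. The heuristicProfile value and the URLs here are assumptions/placeholders, so double check them against the article above.

    // Sketch of the player-side low-latency setting, assuming Azure Media Player
    // (amp) is already loaded on the page via its <script> tag. The
    // heuristicProfile string and URLs are placeholders - verify against the docs.
    declare const amp: any;

    const player = amp("azuremediaplayer", {
        autoplay: true,
        controls: true,
        // The latency article describes a low-latency heuristics profile on the
        // player; the exact value may differ between player versions.
        heuristicProfile: "LowLatency",
    });

    player.src([
        {
            // Streaming URL of the pass-through live event's locator.
            src: "https://<your-streaming-endpoint>/<locator-id>/manifest",
            type: "application/vnd.ms-sstr+xml",
        },
    ]);

Note that the player setting only helps if the live event itself was created with the low latency option enabled, so it's worth confirming both ends.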
We're using websockets (specifically uws.js on Node.js) to run a multiplayer quiz. The server's running on an AWS t2.micro in the eu-west-2a region, but recently, we've been seeing some incredibly high latency from some players - yet only on an intermittent basis.
By latency, what I'm actually measuring is the time taken from sending a broadcast message (using uws's built-in pub/sub) to the player's device telling the server it has safely received it. The message we're tracking tells the player's device to move on to the next phase of the quiz, so it's pretty critical to the workings of the application. Most of the time, for players in the UK, this time is somewhere around 30-60 ms, but every now and then we're seeing delays of up to 17s.
Recently, we had a group on the other side of the world to our server do a quiz, and even though there were only 10 or so players (so the server's definitely not being overloaded), we saw roughly half that group intermittently getting these very, very high latency spikes, where it'd take 12, 17, 22, or even 39(!!) seconds for their device to acknowledge having received the message. Even though this is a slow-paced quiz, that's still an incredible amount of latency, and something that has a real detrimental effect.
My question is, how do I tell what's causing this, and how do I fix it? My guess would be it's something to do with TCP and its in-order delivery, combined with some perhaps dodgy internet connections, as one of the players yesterday seemed to receive nothing for 39 seconds, then three messages all in a row, backed up. To me that suggests packet loss, but I don't know where to even begin when trying to resolve it. I also haven't yet figured out how to reproduce it (I've never seen it happen when I've been playing myself), which makes things even harder.
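For reference, the measurement on our side works roughly like the simplified sketch below; the topic name, message shape, and port are illustrative rather than our exact production code.

    import uWS from "uWebSockets.js";

    // Remember when each broadcast went out, then log the delta when a client
    // acknowledges it. Topic, message shape and port are illustrative only.
    const sentAt = new Map<string, number>(); // messageId -> Date.now() at send

    const app = uWS.App();

    app.ws("/*", {
        open: (ws) => {
            ws.subscribe("quiz/phase");
        },
        message: (ws, message) => {
            const data = JSON.parse(Buffer.from(message).toString());
            if (data.type === "ack" && sentAt.has(data.id)) {
                // In the real app this is tracked per player, not just per message.
                console.log(`ack for ${data.id} after ${Date.now() - sentAt.get(data.id)!} ms`);
            }
        },
    });

    // Called whenever the quiz moves on to the next phase.
    function broadcastNextPhase(id: string): void {
        sentAt.set(id, Date.now());
        app.publish("quiz/phase", JSON.stringify({ type: "next-phase", id }));
    }

    app.listen(9001, (token) => {
        if (token) broadcastNextPhase("phase-1");
    });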
TCP routing issues are unlikely to cause extreme delays of 17+ seconds. Are you sure there is no "store and forward" queuing system that is buffering your messages on a server, or perhaps a cloud pub/sub queue?
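One quick way to rule out buffering inside the server itself is to watch each socket's backpressure in uws; if getBufferedAmount() is non-zero, data is queuing on that socket rather than going out on the wire. A rough sketch, with the threshold, topic, and port purely illustrative:

    import uWS from "uWebSockets.js";

    // Rough backpressure check: log whenever a socket is holding queued data,
    // and when a previously backed-up socket drains.
    uWS.App().ws("/*", {
        maxBackpressure: 1024 * 1024, // cap per-socket buffering at ~1 MiB
        open: (ws) => {
            ws.subscribe("quiz/phase");
        },
        drain: (ws) => {
            // Fires when a backed-up socket has flushed (some of) its buffer.
            console.log(`socket drained, ${ws.getBufferedAmount()} bytes still queued`);
        },
        message: (ws) => {
            if (ws.getBufferedAmount() > 0) {
                console.log(`socket has ${ws.getBufferedAmount()} bytes queued server-side`);
            }
        },
    }).listen(9001, () => {
        // listening
    });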
Another important check: the t2.micro is just about the cheapest VM you can boot on AWS, with the least-assured network QoS. There are no throughput and no jitter assurances on its network performance.
You may wish to review:
EC2 network benchmarking guide from AWS including MTU parameters
Available instance bandwidth for EC2
For example, the t2.micro does not have any assured minimum baseline bandwidth.
I have created a channel on Azure Media Services, I correctly configured it as an RTMP channel and streamed a live video with Android + FFMpeg libraries.
The problem is the client end-point latency.
I need a maximum latency of ~2 seconds, but right now I'm seeing about 25 seconds!
I'm using Azure Media Player in a browser page to stream the content.
Do you know a configuration of the client/channel which can reduce the latency?
Thanks
As you pointed out, there are a few factors which affect latency.
Total delay time =
1. time to push video from the client to the server, plus
2. server processing time, plus
3. latency for delivering content from the server to the client.
Check https://azure.microsoft.com/en-us/documentation/articles/media-services-manage-origins/#scale_streaming_endpoints to see how you can minimize #3 above by configuring a CDN and scaling streaming endpoint units.
Given these 3 components, I don't think at this stage you will be able to achieve less than 2 seconds of end-to-end delay globally, from an Android client to a browser client.
The easiest way to check latency is: ffplay -fflags nobuffer rtmp:///app/stream_name
As I did in this video https://www.youtube.com/watch?v=Ci646LELNBY
If there's no latency with ffplay, then it's the player that introduces the latency.
Before you down-vote this question, please note that I have already searched Google and asked on Apple Developer Forums but got no solution.
I am making an app that uses Core Data with iCloud. Everything is set up fine and the app is saving Core Data records to the persistent store in the ubiquity container and fetching them just fine.
My problem is that to test whether syncing is working fine between two devices (on the same iCloud ID), I depend on NSPersistentStoreDidImportUbiquitousContentChangesNotification being fired so that my app (in the foreground) can update the table view.
Now, it takes a random amount of time for this to happen. Sometimes it takes a few seconds, and at times even 45 minutes is not enough! I have checked my broadband speed several times and everything is fine there.
I have a simple NSLog statement in the notification handler that prints to the console when the notification is fired, and then proceeds to update the UI.
With this randomly large wait time before changes are imported, I am not able to test my app at all!
Anyone knows what can be done here?
Already checked out related threads...
More iCloud Core Data synching woes
App not syncing Core Data changes with iCloud
PS: I also have 15 GB free space in my iCloud Account.
Unfortunately, testing with Core Data + iCloud can be difficult, precisely because iCloud is an asynchronous data transfer, and you have little influence over when that transfer will take place.
If you are working with small changes, it is usually just 10-20 seconds, sometimes faster. But larger changes may be held back to be batch-uploaded by the system. It is also possible that if you constantly hit iCloud with new changes (which is common in testing), it will throttle back the transfers.
There isn't much you can really do about it. Try to keep your test data small where possible, and don't forget the Xcode debug menu items to force iCloud to sync up in the simulator.
This aspect of iCloud file sync is driving a lot of developers to use CloudKit, where at least you have a synchronous line of communication, removing some of the uncertainty. But setting up CloudKit takes a lot of custom code; the alternative is moving to a non-Apple sync solution.
Hope someone can help out here. I have been trying out IIS Smooth Streaming for many weeks, but without much success.
On-demand Smooth Streaming
No problems with streaming on-demand clips on LAN
No problems with streaming on-demand clips across the web
Live streaming using file source
No problems with live streaming a file on LAN
Cannot live stream a file across the web
Initiated a publishing point using AWS-EC2
Connected Encoder Pro to publishing point
Publishing point never gets past "Starting"
Live streaming using live webcam
Slight problem with live streaming from my webcam on LAN
10 seconds lag
After about 20 seconds, the Silverlight client hangs and stops requesting chunks
HTTP 412 - Precondition Failed
Only way to rectify is to refresh the browser
Cannot live stream from webcam across the web
Initiated a publishing point using AWS-EC2
Connected Encoder Pro to publishing point
Publishing point never gets past "Starting"
Things I have tried to rectify network problems
Connecting my laptop directly to the gateway, rather than through a router
Shutting down Windows firewall on the laptop
Launching an AWS-EC2 instance with no firewall
WireShark indicates HTTP 404 and HTTP 501 errors when "connecting" to the publishing point from the encoder
My LAN specs
Running Encoder and IIS Streaming Server on Boot Camp MacBook Pro, i7, 2.2GHz, 4GB RAM
Running Silverlight Client on i5, 2.53GHz, 4GB RAM
Output Stream: Default configurations for H.264 IIS Smooth Streaming Low Bandwidth
To test streaming stuff you really need to use separate PCs as the streamer/encoder, transport server, and client. Or at least start off that way. You are asking a bit too much out of that MacBook Pro there, especially when it comes to I/O.
Our site allows users to record videos using their webcams. This video is saved by FMS 3.5 on the server and then processed. But some users with low bandwidth end up with videos where the video freezes while the audio continues playing. From my research, it looks like setting this value on the FMS server to a higher number will help with this.
The default value is 5. Before I go and make this value much higher, I'd like to know the potential impact on the server. Obviously increasing the value will increase resource usage, but which resources? Will the buffer be stored in memory, or on disk?