We have a custom CAF receiver that serves both VOD and live content. Currently our VOD content uses VMAP, and we would like to use DAI for live content. I'm not finding much documentation on using DAI with the built-in CAF ad functionality.
The CAF ad examples seem to be geared towards VOD content. The documentation I have found relating to DAI and Cast appears to be outdated (v2 instead of v3).
Does anyone have any more information on using a DAI live stream with the CAF receiver?
Is it possible to get the cue points / duration from a DAI live stream to be able to use the server-side ad stitching example?
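To make the question concrete, here's roughly how I'd expect to declare server-stitched breaks in CAF v3 if the cue points were available. `fetchCuePoints` is a hypothetical call to our own backend, which is exactly the piece I don't know how to build for a DAI live stream:

```typescript
// Minimal sketch (CAF v3, typings from @types/chromecast-caf-receiver).
// fetchCuePoints is hypothetical -- some source of DAI cue points.
declare function fetchCuePoints(
  contentId: string
): Promise<{ id: string; start: number; duration: number }[]>;

const context = cast.framework.CastReceiverContext.getInstance();
const playerManager = context.getPlayerManager();

playerManager.setMessageInterceptor(
  cast.framework.messages.MessageType.LOAD,
  async (loadRequestData) => {
    const media = loadRequestData.media;
    const cuePoints = await fetchCuePoints(media.contentId);

    media.breakClips = cuePoints.map((cue) => {
      const clip = new cast.framework.messages.BreakClip(`clip-${cue.id}`);
      clip.duration = cue.duration;
      return clip;
    });

    media.breaks = cuePoints.map((cue) => {
      const brk = new cast.framework.messages.Break(
        `break-${cue.id}`, [`clip-${cue.id}`], cue.start);
      // isEmbedded tells CAF the ad is stitched into the stream itself,
      // so it should not fetch and play a separate ad clip.
      brk.isEmbedded = true;
      return brk;
    });

    return loadRequestData;
  }
);

context.start();
```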
Just an update: the DAI feature will be launched later this quarter, most likely early Q4.
Not entirely new to Azure, but new to the Media Services available on Azure. I am looking for suggestions on which Azure components I should consider to build a solution that analyzes video for certain conditions
(e.g., 1) presence of a human (yes/no); 2) an alert if no human presence is detected for a certain number of minutes; 3) confirmation of whether an identified human is wearing a uniform; etc.).
I have built a somewhat similar on-premises solution in the past using OpenCV and some open-source ML libraries, but I'm not sure which Azure services I can use if this will be running in Azure.
I can live-stream this to Azure and am not looking for an edge solution.
I looked up Azure Video Indexer and it looks promising, but it is probably tuned more for audio analysis than for image-frame analysis.
Suggestions would be appreciated.
Azure Video Indexer is optimized for files, not streams, but it is capable of meeting the requirement, since it detects faces and people (with the advanced preset).
Regarding uniform detection, this is not supported in Video Indexer at the moment, but the ability to detect clothing color will come in the future.
By fragmenting the video, Azure Video Indexer can provide a near-live solution. This means there will be a delay of a few minutes, so it depends on how time-sensitive your requirements are. A sketch of that fragment-and-poll loop follows.
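If it helps, here's a rough TypeScript sketch of the loop against the Video Indexer v2 REST API: upload each fragment, poll the index until processing finishes, and check whether any people/faces were detected. The account ID, token handling, and the exact shape of the insights JSON are assumptions to verify against the current docs.

```typescript
// Sketch of the near-live loop: index one video fragment, then poll.
const LOCATION = "trial";            // assumption: trial account location
const ACCOUNT_ID = "<account-id>";   // placeholder
const ACCESS_TOKEN = "<token>";      // placeholder

async function fragmentHasPeople(fragmentUrl: string): Promise<boolean> {
  const base = `https://api.videoindexer.ai/${LOCATION}/Accounts/${ACCOUNT_ID}`;

  // Kick off indexing for one fragment (publicly reachable URL).
  const upload = await fetch(
    `${base}/Videos?accessToken=${ACCESS_TOKEN}` +
      `&name=fragment&videoUrl=${encodeURIComponent(fragmentUrl)}`,
    { method: "POST" });
  const { id } = await upload.json();

  // Poll until the fragment is processed (hence the few minutes of delay
  // mentioned above).
  while (true) {
    const res = await fetch(
      `${base}/Videos/${id}/Index?accessToken=${ACCESS_TOKEN}`);
    const index = await res.json();
    if (index.state === "Processed") {
      // Human presence ~= any detected faces in the insights.
      const faces = index.videos?.[0]?.insights?.faces ?? [];
      return faces.length > 0;
    }
    await new Promise((r) => setTimeout(r, 30_000)); // wait 30s between polls
  }
}
```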
Regarding your second question: in a few months it will be possible to customize a model to identify specific uniforms. When the bounding box of a detected uniform overlaps the bounding box of a detected person, you can infer that the person is wearing a uniform; a rough sketch follows.
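Purely as an illustration of that matching step (the `Box` shape and the 0.8 threshold are my own assumptions, not a Video Indexer output format):

```typescript
// Approximate "wearing a uniform" by the fraction of the uniform's
// bounding box that falls inside the person's bounding box.
interface Box { x: number; y: number; w: number; h: number; }

function overlapRatio(uniform: Box, person: Box): number {
  const ix = Math.max(0,
    Math.min(uniform.x + uniform.w, person.x + person.w) -
    Math.max(uniform.x, person.x));
  const iy = Math.max(0,
    Math.min(uniform.y + uniform.h, person.y + person.h) -
    Math.max(uniform.y, person.y));
  // Fraction of the uniform box contained in the person box.
  return (ix * iy) / (uniform.w * uniform.h);
}

const isWearingUniform = (uniform: Box, person: Box) =>
  overlapRatio(uniform, person) > 0.8; // threshold is a guess; tune it
```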
I'm looking for a way to publish a live video stream via an Instagram SDK.
I have read the docs here, but the APIs they provide seem very basic.
Is it possible to start live video streaming on Instagram via SDK/APIs?
Thank you.
TL;DR: there is no IG API for this; live streams can only originate from the IG app itself.
As far as I know, they don't currently plan on offering that functionality, though I expect a chat API to come through by the end of the year or in 2019, unless there's a major overhaul of the Live section of the app.
IG Live is an experience made for the typical Instagram use case (a mobile phone with the app installed, a camera, and an internet connection).
Considering the recent release of IGTV, there's definitely a chance they're looking at extending the APIs, since tech companies generally end up supporting programmatic content creation.
You can integrate your software/service with a third-party service called Instafeed.me.
Here is the API documentation. You can create/start/stop a broadcast, get stats and heartbeats, fetch comments, etc.
I am looking for a free alternative to Expression Encoder 4 that can stream an output to IIS Live Smooth Streaming. Does anyone know of any they can recommend?
Thank you.
As far as I know, there is no free alternative to Encoder 4 that can produce Smooth Streaming output.
There are some paid alternatives, like Wowza and Unified Streaming Platform (USP), among others.
FFmpeg can do this, but for now you are tied to the command line, until someone from FFSplit or OBS recognizes that these tools could easily be extended to support IIS publishing points.
Check out the docs (search for the ismv muxer and HTTP output); a sketch of the invocation follows below.
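For reference, here's roughly what that looks like, wrapped in a small Node/TypeScript launcher since there's no GUI tool yet. The publishing-point URL, input, and encoder settings are placeholders, and FFmpeg flags can vary between versions, so check your build's documentation.

```typescript
// Sketch: push a live Smooth Streaming feed to an IIS publishing point
// using FFmpeg's ismv muxer with the "isml" movflag (live ingest).
import { spawn } from "child_process";

const publishingPoint = "http://myserver/live/stream.isml/Streams(stream0)";

const ffmpeg = spawn("ffmpeg", [
  "-re", "-i", "input.mp4",           // read input at native frame rate
  "-c:v", "libx264", "-b:v", "1500k",
  "-c:a", "aac", "-b:a", "128k",
  "-movflags", "isml+frag_keyframe",  // fragmented MP4, live smooth streaming
  "-f", "ismv",
  publishingPoint,                    // HTTP output straight to IIS
]);

ffmpeg.stderr.on("data", (d) => process.stderr.write(d));
ffmpeg.on("close", (code) => console.log(`ffmpeg exited with code ${code}`));
```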
Unreal Media Server is working for me. It's a bit obtuse to configure and looks like it escaped from Windows 95, but it works. The user limit on the free version isn't a concern, since there's never more than one user connected: IIS itself.
http://umediaserver.net
Remember to install a DShow codec pack first.
I am thinking about how to build a Spotify app that does beat detection (extracting the BPM of a song).
For that I need to access the raw audio (the waveform) and analyze it.
I am new to building Spotify apps.
I know that with "libspotify" you can access raw audio. Can you do the same through the Spotify Apps API? And how?
For the record, there currently exist two Spotify Apps APIs:
Current
Preview
Unless you're really keen on writing that beat-detection code yourself, you should look at the APIs provided by The Echo Nest, which include that (and many other awesome things).
See "Getting the tempo, key signature, and other audio attributes of a song".
In a word: no. That isn't currently available in the Apps API.
There's a new endpoint, I guess. See this example: https://medium.com/swlh/creating-waveforms-out-of-spotify-tracks-b22030dd442b?source=linkShare-962ec94337a0-1616364513
That uses the endpoint https://developer.spotify.com/documentation/web-api/reference/tracks/get-audio-analysis/
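For example, a minimal sketch against that endpoint (token acquisition is out of scope; `trackId` and `accessToken` are placeholders):

```typescript
// Fetch the audio analysis for a track and read its overall tempo (BPM).
async function getTempo(trackId: string, accessToken: string): Promise<number> {
  const res = await fetch(
    `https://api.spotify.com/v1/audio-analysis/${trackId}`,
    { headers: { Authorization: `Bearer ${accessToken}` } });
  if (!res.ok) throw new Error(`Spotify API error: ${res.status}`);
  const analysis = await res.json();
  // The "track" section carries the overall tempo; the "beats" array has
  // per-beat timings that could drive a waveform/beat visualization like
  // the one in the linked article.
  return analysis.track.tempo;
}
```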
Edit: I agree with commenter #wizbcn that this does not answer the question. Is it wrong to leave it here? I found this SO post while searching for info about visualizing a track's waveform, as in the linked article. Maybe I should make this a comment instead?
I'm working on a project that gives teachers and students a way to interact with one another, using Azure for content delivery. However, since this is basically a free service (and a non-profit site), not every teacher can buy a copy of Encoder Pro to encode their streams.
This is where I'm at a crossroads and not sure which path to go down. I want teachers to be able to stream their desktops and interact with students, probably using the MSN or Facebook chat services, since that's infrastructure I don't need to pay for. Additionally, how do they capture their desktop? And would Azure be able to convert that into a "smooth streaming" file, so that people with lower-bandwidth connections can watch the stream reliably? I know Azure can function as a CDN, but I'm not sure if it can do the conversion to Live Smooth Streaming so that students can actually make use of the service.
Any ideas would be helpful. I'm kind of brainstorming right now and working on the client end of things, but I've slowed down until I can figure out this problem.
Thanks!
To answer part of your question: Azure recently added a Media Services component. It's still in preview mode (free for now). Think of it as a hosted Expression Encoder Pro exposed through a bunch of APIs. For more info, see https://www.windowsazure.com/en-us/develop/net/how-to-guides/media-services/
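Here's a very rough sketch of what calling the preview REST API looks like, based on that guide: first get an ACS token, then call the Media Services endpoints. The version headers and endpoint details were still in flux during the preview, so treat everything here as an assumption to verify against the docs.

```typescript
// Sketch: authenticate against ACS, then list the available media
// processors (encoders) from the preview Media Services REST API.
async function listMediaProcessors(accountName: string, accountKey: string) {
  // 1. Get an OAuth token from the Access Control Service.
  const tokenRes = await fetch(
    "https://wamsprodglobal001acs.accesscontrol.windows.net/v2/OAuth2-13",
    {
      method: "POST",
      headers: { "Content-Type": "application/x-www-form-urlencoded" },
      body: new URLSearchParams({
        grant_type: "client_credentials",
        client_id: accountName,
        client_secret: accountKey,
        scope: "urn:WindowsAzureMediaServices",
      }),
    });
  const { access_token } = await tokenRes.json();

  // 2. Call the Media Services API with the token.
  const res = await fetch("https://media.windows.net/API/MediaProcessors", {
    headers: {
      Authorization: `Bearer ${access_token}`,
      Accept: "application/json",
      "x-ms-version": "2.0",      // assumption: preview-era version header
      DataServiceVersion: "3.0",
    },
  });
  return res.json();
}
```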