Cirrus or Adobe Media Server

What is the difference between using Cirrus and FMS? What are the pros, cons, limitations and advantages of each?
Thank you!

This link might be helpful: http://labs.adobe.com/technologies/cirrus/
Unlike Adobe Media Server, Cirrus does not support media relay, shared objects, scripting, etc. So with Cirrus you can only develop applications in which Flash Player endpoints communicate directly with each other.

Azure Media Services player

I have a feeling this is a really dumb question, but my research tells me I have to create my own player. Is that true?
I have a link (publish URL) from Azure Media services like this:
http://streamvideotest.streaming.mediaservices.windows.net/4ed49a08-f82d-462e-a05e-acea910064a5/7g91d5d8-b213-406c-90d8-75a3a5e2456d.ism/Manifest
I would like to hand it out to a few people to play the video, or the live feed on that channel. But you need some type of player, right? I've tried Windows Media Player (Open URL), but that always fails.
It depends on which platform you want to reach.
If you are trying to play a live stream on a PC using Smooth Streaming, you could use the Silverlight Player (http://smf.cloudapp.net/healthmonitor). Or, if you want to stream DASH to a modern browser (IE or Chrome), you can play it through the HTML5 video tag natively.
If you are trying to reach the iOS platform, you can do it natively by delivering an HLS stream: append (format=m3u8-aapl) to the manifest URL above.
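To make this concrete, here is a small sketch of how the dynamic-packaging format suffix changes the publish URL from the question. The HLS suffix is the one mentioned above; the DASH format string is an assumption about what the streaming endpoint supports:

```javascript
// Sketch: deriving HLS and DASH URLs from the Smooth Streaming publish URL above.
var smooth = "http://streamvideotest.streaming.mediaservices.windows.net/" +
             "4ed49a08-f82d-462e-a05e-acea910064a5/" +
             "7g91d5d8-b213-406c-90d8-75a3a5e2456d.ism/Manifest";

var hls  = smooth + "(format=m3u8-aapl)";    // HLS, playable natively on iOS
var dash = smooth + "(format=mpd-time-csf)"; // MPEG-DASH, assuming the endpoint supports it
```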
This article describes the different players you can use on each platform: https://msdn.microsoft.com/en-us/library/azure/dn223283.aspx.
Also, Azure Media Services has just rolled out the Azure Media Player, which detects the capabilities of the platform and feeds in the right streaming protocol using the right player technology. Please check out http://azure.microsoft.com/blog/2015/04/15/announcing-azure-media-player/.
Thanks,
Mingfei Yan
The Azure Media Player has officially been rolled out at: http://amsplayer.azurewebsites.net/
Of course, Mingfei is the Queen of Azure Media Services and the most definitive answer to any and all AMS questions!
Just to add to this thread: as Mingfei mentioned, it depends on where you want your video to work. However, if you have a browser-based offering, we'd recommend starting with Azure Media Player.
http://azure.microsoft.com/blog/2015/04/15/announcing-azure-media-player/
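As a rough sketch of what embedding it looks like, assuming the Azure Media Player script and CSS are already on the page along with a <video id="vid" class="azuremediaplayer amp-default-skin"> element:

```javascript
// Sketch: initializing Azure Media Player against a Smooth Streaming publish URL.
// The player detects the platform and picks the right streaming protocol and tech.
var player = amp("vid", { autoplay: false, controls: true, width: "640", height: "400" });
player.src([{
  src: "//streamvideotest.streaming.mediaservices.windows.net/4ed49a08-f82d-462e-a05e-acea910064a5/7g91d5d8-b213-406c-90d8-75a3a5e2456d.ism/Manifest",
  type: "application/vnd.ms-sstr+xml" // Smooth Streaming manifest
}]);
```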

In which language is www.audiotool.com programmed?

I'm learning to code web stuff: Ruby, JavaScript...
I would like to build something that makes noise, like www.audiotool.com.
The app is basically a DAW (digital audio workstation); it is fast and sounds good. You can even use samples and save projects in the cloud.
My main question is: which languages or tools can make an app like this?
I don't know which languages make this kind of app possible.
Is it creating the sound in the browser, or on a server and sending it back?
Any guesses?
Audiotool.com uses Flash to synthesize audio. Their FAQ says that you should update your Flash player if you're having trouble, so that seems like a pretty strong indication that they use Flash.
However, if you want to make music apps, I would advise against using Flash. Newer devices and operating systems are dropping support for Flash (iPhones/iPads already don't support it, I believe).
If you want a future-proofed music-making solution, you can do it all client-side in JavaScript with the Web Audio API.
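To give a sense of what client-side synthesis looks like, here is a minimal sketch using the plain Web Audio API (no library involved):

```javascript
// Minimal Web Audio API sketch: synthesize a one-second 440 Hz tone in the browser.
var ctx = new (window.AudioContext || window.webkitAudioContext)();
var osc = ctx.createOscillator();
var gain = ctx.createGain();

osc.type = "sine";             // waveform: sine, square, sawtooth or triangle
osc.frequency.value = 440;     // pitch in Hz (A4)
gain.gain.value = 0.2;         // keep the output volume moderate

osc.connect(gain);
gain.connect(ctx.destination); // route through the gain node to the speakers
osc.start();
osc.stop(ctx.currentTime + 1); // stop after one second
```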
I author and actively maintain a JavaScript library that aims to simplify the process of building complex apps with the Web Audio API. If you're just getting started with making music on the web, you might want to check it out; the Web Audio API is not terribly beginner-friendly, in my opinion. https://github.com/rserota/wad
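A quick sketch of basic usage, going by the project's README (check the repo for the current API):

```javascript
// Sketch: playing a note with Wad (https://github.com/rserota/wad).
// API names are taken from the project's README; verify against the current docs.
var saw = new Wad({ source: 'sawtooth' }); // an oscillator-backed instrument
saw.play({
  pitch: 'A4', // note name or frequency in Hz
  env: { attack: 0.01, decay: 0.1, sustain: 0.5, hold: 0.5, release: 0.3 }
});
```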

What libraries/APIs allow me to access real-time audio waveforms of a phone call?

I am looking to build an app that needs to process incoming audio on a phone call in real time.
WebRTC allows for this, but I think it works only for browser-based P2P audio communication, not for phone calls/VoIP.
Twilio and Plivo allow you to record the audio for batch/later processing.
Is there a library that will give me access to the audio streams in real time? If not, what would I need to build such a service from scratch?
Thanks
If you are open to using a media server (so that the call is no longer P2P but is mediated by the media server in a B2B model), then perhaps the Kurento Media Server may solve your problem. Kurento Media Server makes it possible to create processing capabilities that are applied in real time to the media streams. There are many examples in the documentation of computer vision and augmented reality algorithms applied in real time to video streams. I've never seen an audio-only processing module, but it should be simple to implement by creating an additional module, which is not too complex if you have some knowledge of C/C++ and media processing concepts.
Disclaimer: I'm part of the Kurento development team.
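For a sense of what the wiring could look like, here is a minimal sketch using the kurento-client Node.js library. The 'MyAudioFilter' module name is hypothetical; it stands in for the custom audio-processing module described above:

```javascript
// Sketch: inserting a (hypothetical) custom audio filter into a Kurento pipeline.
var kurento = require('kurento-client');

kurento('ws://localhost:8888/kurento', function (error, client) {
  if (error) return console.error(error);
  client.create('MediaPipeline', function (error, pipeline) {
    if (error) return console.error(error);
    pipeline.create('WebRtcEndpoint', function (error, webRtc) {
      if (error) return console.error(error);
      // 'MyAudioFilter' does not exist; you would implement it as a custom module.
      pipeline.create('MyAudioFilter', function (error, filter) {
        if (error) return console.error(error);
        webRtc.connect(filter); // the caller's media flows into the filter...
        filter.connect(webRtc); // ...and back out, processed in real time
      });
    });
  });
});
```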

Will TideKit be able to stream live video & audio from Android & iOS cameras & mics to a server?

I need to know if TideKit will be able to stream live video and audio from device cameras and microphones. The Android and iOS APIs allow for this, and I think Flex can do it. I asked about this on the Twitter page, but I'm looking for a more definitive answer. The one I got was "TideKit is a development, not a streaming platform but you could develop an app for that! That’s where TideKit comes into play", which doesn't fully answer the question.
The goal is to stream video from Android and iOS cameras, and audio from the device microphones, to a media streaming server such as Flash Media Server or a Wowza streaming server, using either RTMP or HTTP streaming from the app to the server. It would also work if the stream were sent live in any other way to a server socket and then encoded for redistribution via a streaming server.
The key here, though, is "live", rather than having to wait for a video or audio file to be complete before sending it off to the server. I know it's possible with the platform APIs, and I really hope TideKit will be able to do this, because no other platform similar to TideKit (and there are MANY) can do this besides Flex. I've pored through countless SDK documents. If TideKit can do this, it will attract a lot more customers.
Eagerly awaiting a response,
Thanks
#xendi Thank you for your question. TideKit is an app development platform. You can use it for any type of app development for mobile, desktop and web. We've purposefully kept the core of TideKit small. This is to ensure the core is extremely stable and that most functionality can come through modules.
Out of the box, TideKit has core AV functionality on all platforms. This functionality is extended through TideKit modules that have operating-system implementations, or through pure JavaScript modules. There are almost 100,000 modules of pure JavaScript functionality now available to you through existing repositories, including NPM, Bower and Component, that can simply be consumed in CommonJS.
When a TideKit or JavaScript module is installed, it offers its APIs, extending those already available. Either way, those APIs become available to you in JavaScript.
You already have access to the camera with TideKit. The rest is handling the streaming protocol, i.e. RTSP, RTMP, HTTP, etc. So there are a few ways to accomplish what you want with TideKit:
Using a TideKit module that supports the streaming protocols, by interacting with its APIs in JavaScript.
Using a pure JavaScript solution from a repository, together with TideKit, that supports the protocols.
Writing your own TideKit module that ties into the operating systems' APIs.
Writing the solution in pure JavaScript using TideKit's camera and network APIs.
TideKit is new and has not yet formally launched. We are currently in reservation mode; we will deliver first to those with reservations, and it will be gradually rolled out. Demos are currently being prepared to demonstrate the speed and low barrier to development. When TideKit formally launches, I would check for the availability of modules at that point (for both TideKit and JavaScript implementations). Note that not all possible module functionality will be available at launch; new modules will be released over time.
As an aside, TideKit also supports WebRTC in HTML5, so this could work together with TideKit's other capabilities for some interesting possibilities.
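For the WebRTC angle, browser-side capture of the camera and microphone looks like the following. This is standard HTML5/WebRTC JavaScript; nothing in it is TideKit-specific:

```javascript
// Sketch: standard WebRTC capture of camera and microphone in an HTML5 context.
// An RTCPeerConnection (or a server-side ingest endpoint) would take the stream from here.
navigator.getUserMedia = navigator.getUserMedia ||
                         navigator.webkitGetUserMedia ||
                         navigator.mozGetUserMedia;

navigator.getUserMedia({ video: true, audio: true },
  function (stream) {
    // Preview the live stream locally; a peer connection could forward it to a server.
    var video = document.querySelector('video');
    video.src = window.URL.createObjectURL(stream);
    video.play();
  },
  function (err) { console.error('Capture failed:', err); }
);
```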

How to communicate with mobile devices using Bluetooth in J2ME?

I need to develop a project based on Bluetooth on mobile. Since I am new to J2ME, I studied some articles and ran the project as far as the discovery of devices and services. Now I need to communicate between devices and transfer the desired files. I searched for client-server communication code over Bluetooth and found some, but I don't know how to run that code or take the implementation further.
I have gone through the articles and can run client-server communication. Now I need to transfer the file, and to communicate with a user who is beyond my phone's Bluetooth range by relaying through another phone that is within range.
JSR82.com has many articles and tutorials about how to use Bluetooth from J2ME.
Better yet, refer to the book "Bluetooth Application Programming with the Java APIs" by C Bala Kumar. It will be helpful to you.
