I need to implement a one-to-one videoconferencing solution that is server-based, runs in the browser, is free (or inexpensive), supports SSL, and has good video and audio quality. What would you advise?
See WebRTC. Chrome and Firefox will support it (early support is already in Chrome, and it is coming to Firefox soon). It appears Microsoft will support it too - they're hiring engineers specifically for WebRTC work.
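For the "server-based" part, WebRTC still needs a small signaling server to pair the two peers and relay their session descriptions and ICE candidates before media flows directly. Here is a minimal sketch of that relay logic in plain Node.js; the class name, room semantics and message shapes are my own illustrative assumptions, not part of the WebRTC spec (which deliberately leaves signaling up to you):

```javascript
// Minimal in-memory signaling relay for a one-to-one WebRTC call.
// Each peer registers a send callback; signaling messages (SDP offers,
// answers, ICE candidates) from one peer are forwarded to the other.
class SignalingRoom {
  constructor() {
    this.peers = new Map(); // peer id -> send callback
  }

  join(id, send) {
    if (this.peers.size >= 2) {
      throw new Error('room is full (one-to-one only)');
    }
    this.peers.set(id, send);
  }

  // Forward a signaling message from `fromId` to the other peer, if any.
  relay(fromId, message) {
    for (const [id, send] of this.peers) {
      if (id !== fromId) send(message);
    }
  }
}
```

In the browser, each side would create an `RTCPeerConnection`, attach its `getUserMedia` stream, and push the resulting offer/answer and ICE candidates through a relay like this (typically over WebSockets); the media itself then travels peer-to-peer.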
We are developing a multimedia conference application and want to connect to Lync or Skype for Business.
We can already transfer the video stream (H.264) and the audio stream between Skype and our client.
But the sharing stream has us stuck, especially parsing the RDP protocol.
We have the RDP stream, but how do we extract the shared content (only the graphics data)?
Our application runs on Linux, Mac and Windows (mostly on Linux).
So is there any third-party solution for dealing with the RDP?
As far as I know, Polycom has implemented this functionality.
Please note that the Microsoft Remote Desktop Protocol (RDP) was replaced by Video Based Screen Sharing (VBSS) in Skype for Business. See here or here for more information. As VBSS is the new standard, this might explain your issue. A possible solution might be to look into how you can start using VBSS instead. I wouldn't build anything on RDP, as Microsoft might remove it in a future version now that VBSS exists.
However, as you didn't give more details about your "solution", it's not easy to advise you. Also note that this isn't the case for Lync; but since you mention both Skype for Business and Lync, I'm not sure whether you are seeing the issue with both server versions.
I'm learning to code web stuff: Ruby, JavaScript...
I would like to build something that makes noise, like www.audiotool.com.
The app is basically a DAW (digital audio workstation); it's fast and sounds good... you can even use samples and save projects in the cloud.
My main question is: which languages or tools make an app like this possible?
Is it creating the sound in the browser, or on a server and sending it back?
Any guesses?
Audiotool.com uses Flash to synthesize audio. Their FAQ says you should update your Flash player if you're having trouble, which is a pretty strong indication that they use Flash.
However, if you want to make music apps, I would advise against using Flash. Newer devices and operating systems are dropping support for it (iPhones/iPads already don't support Flash, I believe).
If you want a future-proof music-making solution, you can do it all client-side in JavaScript with the Web Audio API.
I have authored, and actively maintain, a JavaScript library that aims to simplify the process of building complex apps with the Web Audio API. If you're just getting started with making music on the web, you might want to check it out; the Web Audio API is not terribly beginner-friendly, in my opinion. https://github.com/rserota/wad
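To give a feel for what "creating the sound in the browser" looks like with the raw Web Audio API, here is a minimal sketch that plays a single note through an oscillator. The `midiToFreq` helper is pure JavaScript; `playNote` only runs in a browser (or anywhere an `AudioContext` exists), and both names are my own, not part of the API:

```javascript
// Convert a MIDI note number to a frequency in Hz.
// A4 (MIDI 69) = 440 Hz; each semitone is a factor of 2^(1/12).
function midiToFreq(midiNote) {
  return 440 * Math.pow(2, (midiNote - 69) / 12);
}

// Play one note through the Web Audio API (browser only).
function playNote(audioCtx, midiNote, durationSec = 0.5) {
  const osc = audioCtx.createOscillator();
  const gain = audioCtx.createGain();
  osc.frequency.value = midiToFreq(midiNote);
  osc.connect(gain);
  gain.connect(audioCtx.destination);
  // Fade out exponentially to avoid a click when the note stops.
  gain.gain.setValueAtTime(0.2, audioCtx.currentTime);
  gain.gain.exponentialRampToValueAtTime(0.001, audioCtx.currentTime + durationSec);
  osc.start();
  osc.stop(audioCtx.currentTime + durationSec);
}
```

In a browser you would call something like `playNote(new AudioContext(), 69)` to hear an A440 - no server round-trip involved, which answers the "browser or server" question for this kind of app.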
I'm trying to understand what the introduction of the Web Audio API has meant for the development of web based games.
Flash games can of course do some quite advanced audio processing, and for simpler games the audio element was maybe enough. But how has Web Audio API changed the game dev scene? In terms of what can be done, supported platforms and so on.
Supported platforms are Chrome, Safari (with some prefixing caveats) and Firefox across all supported hardware/OS platforms; IE support is in development, though the long tail of older versions will take a while to phase out.
Web Audio enables very complex processing, but also very precise timing and multiple sounds; sound management is far, far easier than previously possible in HTML5. In short, Web Audio dramatically improves the story for game audio development on the Web - which, of course, was one of its goals.
I need to know whether TideKit will be able to stream live video and audio from device cameras and microphones. The Android and iOS APIs allow for this, and I think Flex can do it. I asked about this on the Twitter page, but I'm looking for a more definitive answer. The one I got was "TideKit is a development, not a streaming platform but you could develop an app for that! That’s where TideKit comes into play", which doesn't fully answer the question.
The goal is to stream video from Android & iOS cameras, and audio from the device microphones, to a media streaming server such as Flash Media Server or a Wowza streaming server, using either RTMP or HTTP streaming from the app to the server. Alternatively, it would work if the stream were sent live in any other way to a server socket and then encoded for redistribution via a streaming server.
The key here is "live", rather than having to wait for a video or audio file to be complete before sending it off to the server. I know it's possible with the platform APIs, and I really hope TideKit will be able to do this, because no other platform similar to TideKit (and there are MANY) can do this besides Flex. I've pored through countless SDK documents. If TideKit can do this, it will attract a lot more customers.
Eagerly awaiting a response,
Thanks
#xendi Thank you for your question. TideKit is an app development platform. You can use it for any type of app development for mobile, desktop and web. We've purposefully kept the core of TideKit small. This is to ensure its core is extremely stable and that most functionality can come through modules.
Out of the box, TideKit has core AV functionality on all platforms. Extension of this functionality is through TideKit modules that have operating system implementations or from pure JavaScript modules. There are almost 100,000 modules of pure JavaScript functionality now available to you through existing repositories including NPM, Bower and Component that can simply be consumed in CommonJS.
When a TideKit or JavaScript module is installed it offers its APIs. This extends the APIs with those already available. Either way those APIs become available to you in JavaScript.
You already have access to the camera with TideKit. The rest is handling the streaming protocol, i.e. RTSP, RTMP, HTTP, etc. So there are a few ways to accomplish what you want with TideKit:
Using a TideKit module that supports the streaming protocols by interacting with its APIs in JavaScript.
Using a pure JavaScript solution from a repository together with TideKit that supports the protocols.
Writing your own TideKit module that ties together with APIs of the operating systems.
Writing the solution in pure JavaScript using TideKit's camera and network APIs.
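Since TideKit's own streaming APIs are not public yet, any concrete TideKit code would be speculative. But the "send the stream live to a server socket" option from the question can be sketched in plain Node.js: push each encoded camera frame as soon as it is captured, length-prefixed so the receiving end can split the byte stream back into frames. The framing format here is an illustrative assumption, not a TideKit, RTMP or RTSP format:

```javascript
// Prefix a frame's payload with its length (4 bytes, big-endian),
// so frames can be sent back-to-back over a single TCP socket.
function encodeFrame(payload) {
  const header = Buffer.alloc(4);
  header.writeUInt32BE(payload.length, 0);
  return Buffer.concat([header, payload]);
}

// Incremental decoder: feed it arbitrary socket chunks,
// and it calls onFrame once per complete frame.
function createFrameDecoder(onFrame) {
  let pending = Buffer.alloc(0);
  return function feed(chunk) {
    pending = Buffer.concat([pending, chunk]);
    while (pending.length >= 4) {
      const len = pending.readUInt32BE(0);
      if (pending.length < 4 + len) break; // frame not complete yet
      onFrame(pending.subarray(4, 4 + len));
      pending = pending.subarray(4 + len);
    }
  };
}
```

This is what "live" means in practice: nothing waits for a finished file; each frame is on the wire milliseconds after capture, and the server can re-encode and redistribute as it goes.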
TideKit is new and has not yet formally launched. We are currently in reservations mode: we will deliver first to those with reservations, and roll out gradually. Demos are currently being prepared to demonstrate the speed and low barrier to development. When TideKit formally launches, I would check for the availability of modules at that point (for both TideKit and JavaScript implementations). Note that not all possible functionality in TideKit modules will be available at launch; new modules will be released over time.
As an aside, TideKit also supports WebRTC in HTML5 so this could work together with TideKit's other capabilities for interesting possibilities.
I know I could solve the problem easily with Red5 Media Server. I'm just curious, since I need a Node server anyway, whether I can bypass Red5. Using the HTML5 camera API is not an option, since I target up to four cameras at the same time.
I didn't see anything specific for Node.js to work with webcams. However, I saw a solution online which uses JavaScript, Node.js, HTML5 (canvas) and WebSockets: it extracts frames from the video stream, then transfers the images into an HTML5 canvas over WebSockets, with Node.js acting as an intermediary server.
Hope it helps.
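The relay role Node.js plays in that setup can be sketched as a simple fan-out: the webcam source pushes frames in, and the server broadcasts each frame to every connected viewer, which draws it onto its canvas. Transport details (e.g. a WebSocket library such as `ws`) are assumed and omitted; the fan-out logic itself is plain JavaScript:

```javascript
// A tiny frame broadcaster: viewers register a send callback,
// and every broadcast frame is fanned out to all current viewers.
function createFrameBroadcaster() {
  const viewers = new Set(); // each viewer is a function that sends one frame
  return {
    // Returns an unsubscribe function for when the viewer disconnects.
    addViewer(send) {
      viewers.add(send);
      return () => viewers.delete(send);
    },
    broadcast(frame) {
      for (const send of viewers) send(frame);
    },
    viewerCount() {
      return viewers.size;
    },
  };
}
```

With up to four cameras, you would simply run one broadcaster per camera and route each incoming frame to the matching one.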
I'm the author of https://www.npmjs.com/package/camera-capture, which provides a portable, easy-to-use API for Node.js (server side) to capture camera video, audio and even the desktop. It doesn't require any native libraries or dependencies and is easy to install, since it's based on Puppeteer. I'm already using it in desktop apps based on GTK, Cairo, Qt and others, and it behaves acceptably fast, although I keep looking for optimizations since the project is pretty new. Feedback is most welcome.