I know I could resolve the problem easily with Red5 Media Server. I'm just curious: since I need a Node server anyway, I wondered if I can bypass Red5. Using the HTML5 camera API is not an option, since I target up to four cameras at the same time.
I didn't see anything Node.js-specific for working with webcams. However, I saw a solution online that uses JavaScript, Node.js, HTML5 (canvas) and WebSockets: it grabs frames out of the video stream and then transfers each image over WebSockets into an HTML5 canvas, using Node.js as an intermediary server.
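The gist of that approach, as a rough sketch (using the `ws` package on the server; the names, port and JPEG framing are my own assumptions, not the original project's code):

```javascript
// server.js — a minimal relay: receives frames from a broadcaster
// and fans them out to every other connected client.
const WebSocket = require('ws');
const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', (socket) => {
  socket.on('message', (frame) => {
    // Relay each incoming frame (e.g. a JPEG blob from canvas.toBlob())
    // to all viewers except the sender.
    for (const client of wss.clients) {
      if (client !== socket && client.readyState === WebSocket.OPEN) {
        client.send(frame);
      }
    }
  });
});
```

On the viewing side, each received frame is painted onto the canvas:

```javascript
// viewer.js — draws received frames onto <canvas id="video">.
const ctx = document.getElementById('video').getContext('2d');
const ws = new WebSocket('ws://localhost:8080');
ws.onmessage = async (event) => {
  const bitmap = await createImageBitmap(event.data); // event.data is a Blob
  ctx.drawImage(bitmap, 0, 0);
};
```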
Hope it helps.
I'm the author of https://www.npmjs.com/package/camera-capture, which provides a portable, easy-to-use API for Node.js (server side) to capture camera video, audio and even the desktop. It doesn't require any native libraries or dependencies and is easy to install, since it's based on Puppeteer. I'm already using it in desktop apps based on GTK, Cairo, Qt and others, and it performs acceptably fast, although I keep looking for optimizations since the project is pretty new. Feedback is most welcome.
Related
I'm learning to code web stuff: Ruby, JavaScript...
I would like to build something that makes noise, like www.audiotool.com.
The app is basically a DAW, a digital audio workstation; it's fast and sounds good... you can even use samples and save projects in the cloud.
My main question is: which languages or tools can make an app like this?
I don't know which languages make this kind of app possible.
Is it creating the sound in the browser, or on a server and sending it back?
Any guesses?
Audiotool.com uses Flash to synthesize audio. Their FAQ says that you should update your Flash player if you're having trouble, so that seems like a pretty strong indication that they use Flash.
However, if you want to make music apps, I would advise against using Flash. Newer devices and operating systems are dropping support for it (iPhones/iPads already don't support Flash, I believe).
If you want a future-proof music-making solution, you can do it all client-side in JavaScript with the Web Audio API.
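To give you an idea of how little code a basic synth voice takes, here's a minimal sketch using the raw Web Audio API (runs in any modern browser, no plugins):

```javascript
// Play a 440 Hz sawtooth for half a second through a gain node.
// (Browsers typically require this to run after a user gesture, e.g. a click.)
const ctx = new AudioContext();
const osc = ctx.createOscillator();
const gain = ctx.createGain();

osc.type = 'sawtooth';
osc.frequency.value = 440;                       // A4
gain.gain.setValueAtTime(0.3, ctx.currentTime);  // keep the volume sane

osc.connect(gain);
gain.connect(ctx.destination);
osc.start();
osc.stop(ctx.currentTime + 0.5);
```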
I have authored, and actively maintain, a JavaScript library that aims to simplify the process of building complex apps with the Web Audio API. If you're just getting started with making music on the web, you might want to check it out; the Web Audio API is not terribly beginner-friendly, in my opinion. https://github.com/rserota/wad
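Basic usage looks roughly like this (adapted from the README; check the repo for the current syntax):

```javascript
// Create a sawtooth synth voice with Wad and play a note on it.
var saw = new Wad({ source: 'sawtooth' });
saw.play({ pitch: 'A4', env: { hold: 0.5 } });
```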
I need to know whether TideKit will be able to stream live video and audio from device cameras and microphones. The Android and iOS APIs allow for this, and I think Flex can do it. I asked about this on the Twitter page, but I'm looking for a more definitive answer. The one I got was "TideKit is a development, not a streaming platform but you could develop an app for that! That’s where TideKit comes into play", which doesn't fully answer the question.
The goal is to stream video from the Android & iOS cameras, and audio from the device microphones, to a media streaming server such as Flash Media Server or a Wowza streaming server, using either RTMP or HTTP streaming from the app to the server. Alternatively, it would work if the stream were sent live in any other way to a server socket and then encoded for redistribution via a streaming server.
The key here is "live", rather than having to wait for a video or audio file to be complete before sending it off to the server. I know it's possible with the native APIs, and I really hope TideKit will be able to do this, because no other platform similar to TideKit (and there are MANY) can do it besides Flex. I've pored through countless SDK documents. If TideKit can do this, it will attract a lot more customers.
Eagerly awaiting a response,
Thanks
#xendi Thank you for your question. TideKit is an app development platform. You can use it for any type of app development for mobile, desktop and web. We've purposefully kept the core of TideKit small; this ensures that the core is extremely stable and that most functionality can come through modules.
Out of the box, TideKit has core AV functionality on all platforms. This functionality is extended through TideKit modules that have operating-system implementations, or through pure JavaScript modules. Almost 100,000 pure JavaScript modules are now available to you through existing repositories, including NPM, Bower and Component, and can simply be consumed as CommonJS.
When a TideKit or JavaScript module is installed, it exposes its APIs, which extend those already available. Either way, those APIs become available to you in JavaScript.
You already have access to the camera with TideKit. The rest is handling the streaming protocol, i.e. RTSP, RTMP, HTTP etc. So there are a few ways to accomplish what you want with TideKit:
Using a TideKit module that supports the streaming protocols, by interacting with its APIs in JavaScript.
Using a pure JavaScript solution that supports the protocols, from a repository, together with TideKit.
Writing your own TideKit module that ties into the APIs of the operating systems.
Writing the solution in pure JavaScript using TideKit's camera and network APIs.
TideKit is new and has not yet formally launched. We are currently in reservations mode; we will deliver it first to those with reservations, and it will be rolled out gradually. Demos are currently being prepared to demonstrate the speed and the low barrier to development. When TideKit formally launches, check for the availability of modules at that point (for both TideKit and JavaScript implementations). Note that not all possible TideKit module functionality will be available at launch; new modules will be released over time.
As an aside, TideKit also supports WebRTC in HTML5 so this could work together with TideKit's other capabilities for interesting possibilities.
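For reference, grabbing the camera with plain HTML5/WebRTC in current browsers looks like this (a standard browser API, independent of TideKit):

```javascript
// Request camera + microphone and show the live stream in a <video autoplay> element.
navigator.mediaDevices.getUserMedia({ video: true, audio: true })
  .then((stream) => {
    document.querySelector('video').srcObject = stream;
  })
  .catch((err) => console.error('Could not access the camera:', err));
```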
I'm working on a web app in Node.js to allow clients to view a live streaming video via a unique URL that another client broadcasts to from their webcam, e.g., http://myapp.com/thevideo
I understand that WebRTC is still not supported in enough browsers to be useful.
I would also like to save the video stream to be viewed later within the app.
Things get somewhat confusing as I try to narrow down a solution to make this work.
I would like some recommendations on proven solutions that make this work on desktop and mobile. Any hints would be great.
I'll make a quick suggestion based on the limited details. I would use FFmpeg to encode to HLS. This format plays back natively on iOS and in Safari on the Mac. For all other platforms, either provide an RTMP stream with a Flash front end, or use the commercial version of JW Player 6, which can play HLS. Or use a Wowza server to handle all of this for you.
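For example, turning an incoming RTMP stream into HLS from a Node app could look like this sketch (the ingest URL, output path and segment settings are placeholders to adapt):

```javascript
// Spawn ffmpeg to pull an RTMP stream and repackage it as HLS:
// an .m3u8 playlist plus .ts segments that iOS and Safari play natively.
const { spawn } = require('child_process');

const ffmpeg = spawn('ffmpeg', [
  '-i', 'rtmp://localhost/live/thevideo', // placeholder ingest URL
  '-c:v', 'libx264', '-c:a', 'aac',       // H.264 + AAC, what HLS expects
  '-f', 'hls',
  '-hls_time', '4',                       // ~4-second segments
  '-hls_list_size', '5',                  // keep a rolling playlist window
  'public/streams/thevideo.m3u8',
]);

ffmpeg.stderr.on('data', (d) => process.stderr.write(d)); // ffmpeg logs to stderr
ffmpeg.on('close', (code) => console.log('ffmpeg exited with code', code));
```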
I'm working on a C++ application which takes microphone input, processes it, and plays back some audio. The processing will incorporate a database located on a server. For ease of creating UI and for maximum portability, I'm thinking it would be nice to have the front end be done in HTML. Essentially, I want to record audio in a browser, send that audio to the server for processing, and then receive audio from the server which will then be played back inside the browser.
Obviously, it would be nice if HTML5 supported microphone input, but it does not. So, I will need to create a plugin of some kind to make this happen. NPAPI scares me because of the security issues involved, so I was looking into PPAPI and Native Client. Native Client does not yet support microphone input, and I believe the PPAPI audio-input API is limited to dev builds of Chrome. FireBreath doesn't look like it supports any microphone functions either. So, I believe my options are:
Write my own NPAPI plugin to record the audio
Use Flash to get microphone input
Bail on browsers altogether and just make a native application
The target audience for this is young children and people who aren't computer-adept. I'd like to make it as portable and simple to use as possible. Any suggestions?
If you can do it all in Flash and have the relevant knowledge, that would probably be the best solution:
You can avoid writing platform-specific code, delivery and updating are easy, and Flash has broad coverage, so users don't need to install any custom plugins.
"FireBreath doesn't look like it supports any microphone function either."
You can write your own (platform-dependent) code for audio recording with FireBreath, just like you could in a plain NPAPI plugin. FireBreath just makes the plugin easier to write; the result is still an NPAPI (and ActiveX) plugin with access to native APIs, etc.
You can use the audio and video capturing features in HTML5 (the getUserMedia API).
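For instance, capturing microphone audio and shipping it to a server over a WebSocket can be done entirely in the browser; a minimal sketch, assuming MediaRecorder support and a placeholder endpoint:

```javascript
// Record microphone audio and stream the encoded chunks to a server.
const ws = new WebSocket('wss://example.com/audio'); // placeholder endpoint

navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
  const recorder = new MediaRecorder(stream);
  recorder.ondataavailable = (e) => {
    if (e.data.size > 0 && ws.readyState === WebSocket.OPEN) {
      ws.send(e.data); // Blob of compressed audio (e.g. Opus in WebM)
    }
  };
  recorder.start(250); // emit a chunk every 250 ms
});
```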
I have a large amount of audio stored on my web server in a very custom format that can't be replayed by anything other than my own application. That application is a Win32 app that can connect to my web server and stream and replay that audio.
I'd really like to be able to do the streaming and replaying from within a browser, but don't know where to start. Ideally I'd like the technology to be cross-platform (unlike my current Win32 app) and cross-browser (IE 6 and above and Firefox).
My current thoughts are to look at things like:
Flash, but doesn't that only play back MP3 audio?
Java, but are VMs still freely available?
Converting the audio to a WAV file on the web server and then using someone else's plugin to replay that file. I'd rather keep the conversion off the web server for performance reasons, but it's still an option.
Writing my own custom plugin to do the complete stream-and-replay operation.
Any guidance would be most useful.
Please note that the audio is not music, and simply converting it to another audio format is not trivial. The stored audio also changes frequently (every minute), so it would need constant conversion.
Why are you using a proprietary audio format? I'd probably not even bother downloading a program to listen to it.
I would suggest you convert it to MP3 and then use Flash.
Building your own plugin would probably be hard; there are so many different platforms you'd have to cater for, and something like Flash has already been written for them.
Apart from converting server-side: implement a decoder for your format in ActionScript or Java. Then you can write a Flash movie or a Java applet that plays it. Both languages/runtimes should be fast enough to decode in real time unless your format is very complex. Flash would be the more accessible of the two, since nearly everyone has the plugin installed. (It's possible that playing a raw sound buffer isn't supported by Flash versions older than 10; I'm no expert on that.) The Java plugin is definitely free, but you'd require users to install it.
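The same decode-client-side idea also works in plain JavaScript with the Web Audio API; a sketch, where the decoder producing the samples is hypothetical and stands in for your own:

```javascript
// Play raw PCM samples (a Float32Array in the -1..1 range) produced by a
// custom decoder, e.g. decodeMyFormat(bytes) — a placeholder, not a real API.
const ctx = new AudioContext();

function playDecoded(samples, sampleRate) {
  const buffer = ctx.createBuffer(1, samples.length, sampleRate); // mono
  buffer.copyToChannel(samples, 0);

  const src = ctx.createBufferSource();
  src.buffer = buffer;
  src.connect(ctx.destination);
  src.start();
}
```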
I'd go with converting the audio to WAV (or MP3) on the server. Writing your own cross-platform browser component would be a lot of work, thanks to the different ways the major OSes handle their audio APIs.
Try taking a look at SHOUTcast.
Basically, it's a server app that will stream music to any client that connects to it through a browser (effectively your own radio station). I've never used it myself, but it should be straightforward.
Another idea is Winamp Remote. Again, you install the app on the server, but this time you can browse your music collection on their website and play individual songs.