How to use a VNC source in Quartz Composer?

I know I can use video camera sources for further processing in Quartz Composer. (For example, you can take the iSight's video feed and do something with it.)
But is it also possible to show/use the output of a VNC server in Quartz Composer? Maybe there is some kind of VNC client plugin for Quartz Composer?

I would try using CamTwist as a go-between: you can use VNC as a source in CamTwist, and CamTwist can then pose as a camera in Quartz Composer.
You can also use Quartz compositions as filters in CamTwist, so you could go very meta.

Related

What is the simplest way to implement small-group, low-latency, one-to-many audio broadcast

I have a Linode server and need to broadcast one-to-many audio (they can hear but cannot talk back) to a group of three to five people. I looked at WebRTC and the Janus server, but they seem like complete overkill. Using commercial applications like Skype, Discord etc. results in low audio quality, and it is mono. The best possible audio quality and low latency (on a par with that of Skype, Discord etc.) are essential.
Any pointers would be greatly appreciated.
I can recommend building such a system on Icecast streaming. It's an old, proven technology with latency close to real-time.
You could use any set of Icecast-enabled tools for that.
As an example, here's what you can do with tools by our company:
Larix Broadcaster mobile app allows streaming in audio-only mode.
Nimble Streamer software media server can take Larix's input and produce an Icecast stream. You can use any Icecast-enabled server here instead.
SLDP Player can play the Icecast stream produced by Nimble Streamer or any other Icecast-enabled server.
The same can also be built with other companies' products, so you can pick the right tools yourself.
A super simple setup would be to use the command-line tool ffmpeg (it also has an API); see the docs at https://trac.ffmpeg.org/wiki/ffserver
On the machine where your source audio lives, just launch ffserver:
ffserver -f /etc/ffserver.conf
In that config, put the location of the source audio and the output URL it will publish to. Then your client receivers can use ffplay:
ffplay <stream URL>
ffmpeg is a free, open-source industry workhorse for audio/video manipulation; it's the underlying technology that more visible tools like VLC use under the covers.
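For reference, here is a minimal ffserver.conf sketch along those lines; the port, feed name, stream name and codec settings are illustrative placeholders, not required values:

    Port 8090
    BindAddress 0.0.0.0
    MaxHTTPConnections 20
    MaxClients 10

    # Ingest point that the encoder pushes into.
    <Feed feed1.ffm>
    File /tmp/feed1.ffm
    FileMaxSize 1M
    </Feed>

    # Published audio-only stream that listeners pull.
    <Stream radio.mp3>
    Feed feed1.ffm
    Format mp2
    AudioCodec libmp3lame
    AudioBitRate 192
    AudioChannels 2
    AudioSampleRate 44100
    NoVideo
    </Stream>

With that running, something like ffmpeg -f alsa -i default http://localhost:8090/feed1.ffm pushes the sound card's input into the feed, and listeners play ffplay http://<server>:8090/radio.mp3 (the exact input options depend on your platform and audio setup).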

Audio hooking or a custom audio driver for audio processing and routing to the default audio device

I have developed a pretty complex piece of audio software for my client, with plugins for Winamp, Windows Media Player and VST. Now the client is interested in some method of avoiding the maintenance of that multitude of plugins; we have no way to support all the media players out there.
The client does not care about Unix/Mac yet, so I can look only at Windows XP and Vista/7.
Basically, what we need is a way to always reliably intercept as many audio APIs as possible (well, except maybe ASIO; that's another story, I guess), then pass this audio through our custom effects engine and route it back to the default audio device, whatever it is.
Now I am thinking about what options I have (theoretically).
I could use hooks. I would need to globally hook the older waveOut API and also DirectSound.
But will this still work on Vista/7?
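For the hooking route, here is a minimal sketch of what I have in mind: it detours waveOutWrite using the open-source MinHook library, the effects-engine call is a placeholder, and it only covers waveOut inside whatever process the DLL is injected into (DirectSound would need its own detours):

    /* Build as a DLL and link against winmm.lib and MinHook. */
    #include <windows.h>
    #include <mmsystem.h>
    #include <MinHook.h>   /* third-party hooking library */

    /* Pointer to the original waveOutWrite, filled in by MinHook. */
    static MMRESULT (WINAPI *orig_waveOutWrite)(HWAVEOUT, LPWAVEHDR, UINT);

    /* Placeholder for the custom effects engine (hypothetical). */
    static void process_samples(void *data, DWORD bytes)
    {
        (void)data; (void)bytes; /* DSP would go here */
    }

    /* Runs each outgoing buffer through the effects engine before
       handing it to the real waveOutWrite. */
    static MMRESULT WINAPI my_waveOutWrite(HWAVEOUT hwo, LPWAVEHDR hdr, UINT cbwh)
    {
        process_samples(hdr->lpData, hdr->dwBufferLength);
        return orig_waveOutWrite(hwo, hdr, cbwh);
    }

    BOOL WINAPI DllMain(HINSTANCE inst, DWORD reason, LPVOID reserved)
    {
        if (reason == DLL_PROCESS_ATTACH) {
            MH_Initialize();
            MH_CreateHook((LPVOID)&waveOutWrite, (LPVOID)&my_waveOutWrite,
                          (LPVOID *)&orig_waveOutWrite);
            MH_EnableHook(MH_ALL_HOOKS);
        } else if (reason == DLL_PROCESS_DETACH) {
            MH_DisableHook(MH_ALL_HOOKS);
            MH_Uninitialize();
        }
        return TRUE;
    }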
I could use a virtual driver, like the author of Virtual Audio Cable did:
http://software.muzychenko.net/eng/vac.htm
Seems like a pretty daunting task. Anyway, the client will contact the author of VAC to see if he agrees to sell his source code for a reasonable price.
This driver could install itself as the default audio output device, intercept the audio stream from Windows, and pass it on to the real output device. Hmm, but what about the various DirectSound audio buffers: do I have to mix them myself, or is there any way I could tell the Windows mixer to mix them all for me and hand me a single mixed audio stream?
It seems this custom driver would of course kill all hardware audio acceleration, but we can live with that if we warn our customers about the issue.
As I understand it, the most current Windows driver standard is WDF.
But maybe it does not work for audio on Windows Vista/7?
I know Vista/7 has a different audio stack from XP.
If I can do it using WDF, what kind of driver should I write: kernel mode or user mode?
Maybe I am missing more elegant and simple options to intercept, process and route audio on Windows?
Try the Virtual Audio Streaming SDK. It is also a virtual sound card and lets you read/process audio data in real time.
http://www.virtualaudiostreaming.net/sdk-license.html

Web App with Microphone Input

I'm working on a C++ application which takes microphone input, processes it, and plays back some audio. The processing will incorporate a database located on a server. For ease of creating UI and for maximum portability, I'm thinking it would be nice to have the front end be done in HTML. Essentially, I want to record audio in a browser, send that audio to the server for processing, and then receive audio from the server which will then be played back inside the browser.
Obviously, it would be nice if HTML5 supported microphone input, but it does not. So, I will need to create a plugin of some kind in order to make this happen. NPAPI scares me because of the security issues involved, so I was looking into PPAPI and Native Client. Native Client does not yet support microphone input, and I believe that the PPAPI audio input API would be limited to a dev build of Chrome. FireBreath doesn't look like it supports any microphone function either. So, I believe my options are:
Write my own NPAPI plugin to record the audio
Use Flash to get microphone input
Bail on browsers altogether and just make a native application
The target audience for this is young children and people who aren't computer-adept. I'd like to make it as portable and simple to use as possible. Any suggestions?
If you can do it all in Flash and have the relevant knowledge, that would probably be the best solution:
You can avoid writing platform-specific code, delivery/updating is easy, and Flash has broad coverage, so users don't need to install any custom plugins.
"FireBreath doesn't look like it supports any microphone function either."
You can write your own (platform-dependent) code for audio recording with FireBreath, just like you could in a plain NPAPI plugin. FireBreath just makes it easier to write the plugin; the result is still an NPAPI (and ActiveX) plugin with access to native APIs etc.
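For a sense of what that platform-dependent code looks like, here is a minimal sketch of microphone capture on Windows using the waveIn API; the fixed mono 16 kHz format and the single 5-second buffer are purely illustrative, and inside a real plugin you would capture on a worker thread and stream the buffers out instead of writing to stdout:

    /* Link against winmm.lib. */
    #include <windows.h>
    #include <mmsystem.h>
    #include <stdio.h>

    int main(void)
    {
        /* 16-bit mono PCM at 16 kHz. */
        WAVEFORMATEX fmt = {0};
        fmt.wFormatTag = WAVE_FORMAT_PCM;
        fmt.nChannels = 1;
        fmt.nSamplesPerSec = 16000;
        fmt.wBitsPerSample = 16;
        fmt.nBlockAlign = fmt.nChannels * fmt.wBitsPerSample / 8;
        fmt.nAvgBytesPerSec = fmt.nSamplesPerSec * fmt.nBlockAlign;

        HWAVEIN in;
        if (waveInOpen(&in, WAVE_MAPPER, &fmt, 0, 0, CALLBACK_NULL) != MMSYSERR_NOERROR)
            return 1;

        static char data[16000 * 2 * 5];   /* ~5 seconds of audio */
        WAVEHDR hdr = {0};
        hdr.lpData = data;
        hdr.dwBufferLength = sizeof data;

        waveInPrepareHeader(in, &hdr, sizeof hdr);
        waveInAddBuffer(in, &hdr, sizeof hdr);
        waveInStart(in);
        while (!(hdr.dwFlags & WHDR_DONE))  /* poll until the buffer fills */
            Sleep(100);

        waveInUnprepareHeader(in, &hdr, sizeof hdr);
        waveInClose(in);
        fwrite(data, sizeof data, 1, stdout); /* raw PCM out */
        return 0;
    }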
You can use the Capturing Audio & Video features in HTML5; see this link for more information.

Programming webcam on Linux

I want to be able to capture images from a webcam on Linux. This is a project requirement, and I'm having difficulty finding up-to-date information about capturing images from a webcam on Linux. Is it true that every webcam has a different API (unlike on Windows, where I can use a common API), so I must write the program for a specific webcam?
Webcams on Linux are accessed through the Video4Linux API, which is common across all camera models.
There are plenty of existing framegrabbers for webcams that use this API; you could look at them for ideas, or just use one as-is.
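To give a flavour of the API, here is a minimal single-frame capture sketch using memory-mapped V4L2 buffers; the device path, resolution and pixel format are assumptions, and error checking on the ioctl calls is omitted for brevity:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <linux/videodev2.h>

    int main(void)
    {
        int fd = open("/dev/video0", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        /* Negotiate a pixel format with the driver. */
        struct v4l2_format fmt = {0};
        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        fmt.fmt.pix.width = 640;
        fmt.fmt.pix.height = 480;
        fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
        ioctl(fd, VIDIOC_S_FMT, &fmt);

        /* Ask for one memory-mapped capture buffer. */
        struct v4l2_requestbuffers req = {0};
        req.count = 1;
        req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        req.memory = V4L2_MEMORY_MMAP;
        ioctl(fd, VIDIOC_REQBUFS, &req);

        struct v4l2_buffer buf = {0};
        buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_MMAP;
        buf.index = 0;
        ioctl(fd, VIDIOC_QUERYBUF, &buf);
        void *mem = mmap(NULL, buf.length, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, buf.m.offset);

        /* Queue the buffer, start streaming, and wait for one frame. */
        ioctl(fd, VIDIOC_QBUF, &buf);
        enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        ioctl(fd, VIDIOC_STREAMON, &type);
        ioctl(fd, VIDIOC_DQBUF, &buf);  /* blocks until a frame arrives */

        fprintf(stderr, "captured %u bytes\n", buf.bytesused);
        fwrite(mem, buf.bytesused, 1, stdout);  /* raw YUYV frame to stdout */

        ioctl(fd, VIDIOC_STREAMOFF, &type);
        munmap(mem, buf.length);
        close(fd);
        return 0;
    }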

How to program an audio/video application on a network?

I want to make (for fun, as a challenge) a videoconference application. I have some ideas about this:
1) taking the audio/video streams (I don't know what an audio/video stream is)
2) passing this to a server that lets the clients communicate. I can figure out how to write a server (there are a lot of books and documentation about that), but I really don't know how to interact with the webcam and with audio/video in general.
I would like some links, books and suggestions about the basics of digital audio/video, especially on the programming side. Please help me!
I want to make it run on a Linux platform.
Linux makes video grabbing really nice, as long as you have a driver that exposes the video stream through the /dev/video* device nodes. All you have to do is open a control connection to the device [an exercise for the OP] and then read the channel like a file [given the parameters set by the control connection]. Audio should work the same way, but don't quote me on it.
BTW: video streaming from a server is a very complex issue. You have to develop or use an existing protocol. You have to be very aware of network delays, and adjust the information sent (resize or recompress) to the client based on the capacity of the link between the client and the server.
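As a rough illustration of that "read it like a file" model, here is a sketch that reads raw frame data from the device node and pushes the bytes to a server over TCP; the device path, server address and port are hypothetical, the driver is assumed to support the read() interface, and the format is assumed to have been negotiated beforehand (e.g. via the V4L2 VIDIOC_S_FMT ioctl):

    #include <arpa/inet.h>
    #include <fcntl.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int cam = open("/dev/video0", O_RDONLY);
        int sock = socket(AF_INET, SOCK_STREAM, 0);

        struct sockaddr_in srv = {0};
        srv.sin_family = AF_INET;
        srv.sin_port = htons(9000);                       /* hypothetical port */
        inet_pton(AF_INET, "192.0.2.10", &srv.sin_addr);  /* hypothetical server */

        if (cam < 0 || connect(sock, (struct sockaddr *)&srv, sizeof srv) < 0) {
            perror("setup");
            return 1;
        }

        char buf[65536];
        ssize_t n;
        while ((n = read(cam, buf, sizeof buf)) > 0)  /* each read returns frame data */
            if (write(sock, buf, (size_t)n) != n)
                break;  /* server went away or the link is saturated */

        close(cam);
        close(sock);
        return 0;
    }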
