I'm trying to write a program that uses GStreamer to connect to PulseAudio as a source so that I can intercept any audio that's being played. I have no need to play it back, so my assumption is that my pipeline only needs a source and a demuxer, though I'm not sure about the latter. The hello world example that I'm working off of is here, except that instead of using "filesrc" I'm using "pulsesrc".
Is there a good example of this out there already that I just haven't found the right search terms for? Do you have to do anything to PulseAudio to make it let you monitor its stream? Or should I instead be trying to connect to a sink to monitor what's being played?
I think you will need to check the sources with e.g.:
pacmd list-sources | grep -e device.string -e 'name:'
and then connect to the source whose name ends in ".monitor" by using the "device" property of pulsesrc.
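As a minimal sketch, assuming GStreamer 1.x with the Python bindings, and with a placeholder monitor device name you'd replace with one from the command above (note that no demuxer is needed; pulsesrc already delivers raw audio):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

# The device name below is a placeholder; use a ".monitor" source
# from your own `pacmd list-sources` output.
pipeline = Gst.parse_launch(
    "pulsesrc device=alsa_output.pci-0000_00_1b.0.analog-stereo.monitor "
    "! audioconvert ! fakesink"
)

pipeline.set_state(Gst.State.PLAYING)
try:
    GLib.MainLoop().run()  # a real app would tap the audio here
finally:
    pipeline.set_state(Gst.State.NULL)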
Related
I would like to automute certain audio clients on demand, without human intervention (e.g. pulling up pavucontrol), on Fedora with PipeWire. This used to work with pacmd, but that doesn't work under PipeWire, and the replacement command, pw-cli, doesn't support set-sink-input-mute or set-source-output-mute.
pw-cli doesn't seem to support muting, as far as I could see. pw-mon shows a relevant change when using pavucontrol (or easyeffects) to mute a stream, but that didn't help me figure out how to do it myself.
This is a bit late, but I recently needed it too. Instead of pacmd you can use pactl, which also works under PipeWire via pipewire-pulse: pactl set-sink-input-volume <sink-input-index> <volume> to change volume, or pactl set-sink-input-mute <sink-input-index> 1 to mute.
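Building on that, here is a rough sketch of automating it in Python; the mute_clients helper and the client name "Firefox" are hypothetical, and it assumes pactl is talking to pipewire-pulse (or plain PulseAudio):

import re
import subprocess

def mute_clients(app_name, mute=True):
    # Scrape the text output of `pactl list sink-inputs` for stream indices,
    # then mute every stream whose application.name matches.
    out = subprocess.run(["pactl", "list", "sink-inputs"],
                         capture_output=True, text=True, check=True).stdout
    index = None
    for line in out.splitlines():
        m = re.match(r"Sink Input #(\d+)", line.strip())
        if m:
            index = m.group(1)
        elif index and f'application.name = "{app_name}"' in line:
            subprocess.run(["pactl", "set-sink-input-mute", index,
                            "1" if mute else "0"], check=True)

mute_clients("Firefox")  # hypothetical client name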
I am trying to create snapshots from a video stream using the "scene" video filter. I'm on Windows for now, but this will run on Linux. I don't want the video output window to display. I can get the scenes to generate if I don't use the --vout=dummy option; when I include that option, no scenes are generated.
This example on the Wiki indicates that it's possible. What am I doing wrong?
Here is the relevant line from my LibVLCSharp code:
LibVLC libVLC = new LibVLC(
    "--no-audio",
    "--no-spu",
    "--vout=dummy",
    "--video-filter=scene",
    "--scene-format=jpeg",
    "--scene-prefix=snap",
    "--scene-path=C:\\temp\\",
    "--scene-ratio=100",
    $"--rtsp-user={rtspUser}",
    $"--rtsp-pwd={rtspPassword}");
For VLC 3, you will need to disable hardware acceleration, which seems to be incompatible with the dummy vout.
In my tests, this had to be done on the media rather than globally:
media.AddOption(":avcodec-hw=none");
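For what it's worth, here is the same idea sketched with the Python bindings (python-vlc) rather than LibVLCSharp, with a placeholder RTSP URL; it just shows where the per-media option goes relative to the global options:

import time
import vlc

# Global options mirror the LibVLC constructor arguments above.
instance = vlc.Instance([
    "--no-audio", "--vout=dummy", "--video-filter=scene",
    "--scene-format=jpeg", "--scene-prefix=snap",
    "--scene-path=/tmp/", "--scene-ratio=100",
])
media = instance.media_new("rtsp://example.com/stream")  # placeholder URL
media.add_option(":avcodec-hw=none")  # per-media: disable hardware decoding

player = instance.media_player_new()
player.set_media(media)
player.play()
time.sleep(30)  # let some snapshots accumulate
player.stop()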
I still get many "Too high level of recursion" errors, and for those, I guess you'd better open an issue on VideoLAN's Trac.
I'm trying to open an RTSP stream using the VideoCapture class's open function.
I want to set a timeout, but I don't know how to do it.
I'm keeping this answer short because I'm still looking for a better solution: there is an FFmpeg wrapper that can be compiled into an FFmpeg DLL, which I guess replaces OpenCV's FFmpeg DLL.
This FFmpeg wrapper is built in such a way that a timeout can be set in milliseconds. I'll hopefully edit this when I have more information.
This is the wrapper; it's supposed to be easy to test in VS2012: Git
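As an aside, if you can use a newer OpenCV (4.5.2 or later, built with the FFmpeg backend), it exposes open and read timeouts directly as capture properties, which may make the wrapper unnecessary. A sketch with a placeholder URL:

import cv2

# Assumes OpenCV >= 4.5.2 with the FFmpeg backend; both timeouts
# are in milliseconds.
cap = cv2.VideoCapture(
    "rtsp://example.com/stream",  # placeholder URL
    cv2.CAP_FFMPEG,
    [cv2.CAP_PROP_OPEN_TIMEOUT_MSEC, 5000,
     cv2.CAP_PROP_READ_TIMEOUT_MSEC, 5000],
)
if not cap.isOpened():
    print("open() failed or timed out")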
I'm starting a project that simply reads the I/Q data from SDR radio software like GNU Radio as input for my own application. I thought about using a pipe to do so, but I don't really know how to use one in this case. Another idea is to get the I/Q data directly from the sound card.
I would like to ask what the most effective way to get this data is. Thanks.
Named pipes are a very common way to do this. The concept is simple. First, you create a named pipe using the mkfifo command:
$ mkfifo my_named_pipe
$ ls -l
prw-rw-r-- 1 user user 0 Dec 16 10:04 my_named_pipe
As you can see, there's a new file-like thing with a 'p' flag.
Next, configure your GNU Radio app to write to this pipe (e.g. by using a File Sink or File Descriptor Sink block).
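If it helps, here is a minimal sketch of that flowgraph written directly in Python; the cosine signal source is just a stand-in for your real SDR source block:

from gnuradio import analog, blocks, gr

class IQToPipe(gr.top_block):
    def __init__(self):
        gr.top_block.__init__(self)
        samp_rate = 32000
        # Placeholder source; swap in your actual SDR source block.
        src = analog.sig_source_c(samp_rate, analog.GR_COS_WAVE, 1000, 1.0)
        throttle = blocks.throttle(gr.sizeof_gr_complex, samp_rate)
        sink = blocks.file_sink(gr.sizeof_gr_complex, "my_named_pipe")
        self.connect(src, throttle, sink)

IQToPipe().run()  # streams until interrupted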
Then, all you need to do is configure your app to read from this file. Note that the GNU Radio app and your app need to run at the same time.
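On your app's side, a minimal Python sketch of the reader, assuming the flowgraph writes gr_complex samples (interleaved 32-bit float I/Q pairs, i.e. numpy's complex64):

import numpy as np

with open("my_named_pipe", "rb") as f:  # open() blocks until the writer connects
    buf = b""
    while True:
        chunk = f.read(65536)
        if not chunk:
            break
        buf += chunk
        n = len(buf) // 8 * 8            # 8 bytes per complex64 sample
        samples = np.frombuffer(buf[:n], dtype=np.complex64)
        buf = buf[n:]                    # keep any partial sample for next read
        # ... process the I/Q samples here ...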
Of course, you could consider simply writing your app in GNU Radio. Using Python blocks, it's very easy to get started.
I want to stream my webcam on Linux with VLC to the iPod. From what I've seen on the web, the easiest way is to use a web server and then access it from the iPod like this:
NSString *url = @"http://www.example.com/path/to/movie.mp4";
MPMoviePlayerController *moviePlayer = [[MPMoviePlayerController alloc] initWithContentURL:[NSURL URLWithString:url]];
[moviePlayer play];
I have never used web services before and would like to know how I can achieve this whole process. Thank you.
EDIT: After setting up the Linux/VLC/segmenter combination, this is what I get in the terminal after running the command from Warren and exiting VLC:
VLC media player 1.1.4 The Luggage (revision exported)
Blocked: call to unsetenv("DBUS_ACTIVATION_ADDRESS")
Blocked: call to unsetenv("DBUS_ACTIVATION_BUS_TYPE")
[0x87bc914] main libvlc: Running vlc with the default interface. Use 'cvlc' to use vlc without interface.
Blocked: call to setlocale(6, "")
Blocked: call to sigaction(17, 0xb71840d4, 0xb7184048)
Warning: call to signal(13, 0x1)
Warning: call to signal(13, 0x1)
Warning: call to srand(1309581991)
Warning: call to rand()
Blocked: call to setlocale(6, "")
(process:4398): Gtk-WARNING **: Locale not supported by C library.
Using the fallback 'C' locale.
Warning: call to signal(13, 0x1)
Warning: call to signal(13, 0x1)
Blocked: call to setlocale(6, "")
Could not open input file, make sure it is an mpegts file: -1
Can you help me understand all this? Thanks!
The URL you show assumes the video is prerecorded.
For live HTTP streaming to an iOS device, the URL will instead end in .m3u or .m3u8, which is a common playlist format type. (It is an extended version of the Icecast playlist format, documented in this IETF draft.) This playlist tells the iOS device how to find the other files it will retrieve, in series, in order to stream the video.
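For reference, the playlist ends up looking roughly like this (the file names and durations here are made up):

#EXTM3U
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10,
http://www.example.com/path/to/test-00001.ts
#EXTINF:10,
http://www.example.com/path/to/test-00002.ts
#EXTINF:10,
http://www.example.com/path/to/test-00003.ts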
The first tricky bit is producing the video stream. Unlike all other iOS-compatible media files, live HTTP streaming requires an MPEG-2 transport stream (.ts) container rather than an MPEG-4 Part 14 container (.mp4, .m4v). The video codec is still H.264 and the audio AAC, as you might expect.
A command something like this should work:
$ vlc /dev/camera --intf=dummy --sout-transcode-audio-sync --sout='#transcode{\
vcodec=h264,venc=x264{\
aud,profile=baseline,level=30,keyint=30,bframes=0,ref=1,nocabac\
},\
acodec=mp4a,ab=56,deinterlace\
}:\
duplicate{dst=std{access=file,mux=ts,dst=-}}' > test.ts
This is all one long command. I've just broken it up for clarity, and to work around SO's formatting style limits. You can remove the backslashes and whitespace to make it a single long line, if you prefer. See the VLC Streaming HOWTO for help on figuring out what all that means, and how to adjust it.
The /dev/camera bit will probably have to be adjusted, and you may want to fiddle with the A/V encoding parameters based on Apple's best practices tech note (#TN 2224) to suit your target iOS device capabilities.
The next tricky bit is producing this playlist file and the video segment files from the live video feed.
Apple offers a program called mediastreamsegmenter which does this, but it's not open source, it only runs on OS X, and it isn't even freely downloadable. (It comes as part of Snow Leopard, but otherwise, you have to be in Apple's developer program to download a copy.)
Chase Douglas has produced a basic segmenter which builds against libavformat from ffmpeg. There is a newer variant here which has various improvements.
To combine this with the vlc camera capture and encoding command above, replace the > test.ts part with something like this:
| segmenter - 10 test test.m3u8 http://www.example.com/path/to/
This pipes VLC's video output through the segmenter, which breaks the TS up into 10 second chunks and maintains the test.m3u8 playlist file that tells the iOS device how to find the segment files. The - argument tells the segmenter that the video stream is being piped into its standard input, rather than coming from a file. The URL fragment at the end gets prepended onto the file names mentioned in the M3U file.
Having done all that, the only adjustment that should be needed for your Cocoa Touch code is that it should be accessing test.m3u8 instead of movie.mp4.