GStreamer Custom-Plugin and alsasink Unable to Detect Format

I'm developing a GStreamer plugin following the GStreamer Plugin Writer's Guide and using gst-element-maker from the gst-plugins-bad repository with the base class set to basetransform. As a starting point I have developed a plugin named MyFilter that simply passes the data along the chain. The plugin is working, but when I run gst-launch with the debug level set to 2, I get the following error:
alsa gstalsa.c:124:gst_alsa_detect_formats: skipping non-int format.
I am executing the command:
gst-launch --gst-debug-level=2 --gst-plugin-load=./src/libgstmyfilter.la filesrc location=./song.mp3 ! flump3dec ! audioconvert ! audioresample ! myfilter ! alsasink
From the base class that was created by gst-element-maker I have removed the calls to gst_pad_new_from_static_template(), because they were returning errors reporting that the sink and source pads were already created. I have set the chain function using gst_pad_set_chain_function(), implemented the function gst_myfilter_transform_caps(), and added code to handle the GST_EVENT_NEWSEGMENT event. The STATIC_CAPS string I am using for both source and sink is:
"audio/x-raw-int, "
"rate = (int) { 16000, 32000, 44100, 48000 }, "
"channels = (int) [ 1, 2 ], "
"endianness = (int) BYTE_ORDER, "
"signed = (boolean) true, "
"width = (int) 16, "
"depth = (int) 16"
I return the caps from gst_myfilter_transform_caps() using gst_pad_get_fixed_caps_func() on GST_BASE_TRANSFORM_SRC_PAD(trans) or GST_BASE_TRANSFORM_SINK_PAD(trans), depending on the direction. The pad caps are set using the default code created by gst-element-maker in gst_myfilter_base_init() using:
gst_element_class_add_pad_template(element_class, gst_static_pad_template_get(&gst_myfilter_sink_template));
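For comparison, a pass-through element usually implements transform_caps by handing back the caps it is given rather than a pad's fixed caps. A minimal sketch under that assumption (GStreamer 0.10 GstBaseTransform API, function name taken from the question):

static GstCaps *
gst_myfilter_transform_caps (GstBaseTransform * trans,
    GstPadDirection direction, GstCaps * caps)
{
  /* Identity transform: the output format equals the input format,
   * so simply return a copy of the caps we were given. */
  return gst_caps_copy (caps);
}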
Is there a problem with the GstBaseTransform class? I have another custom filter which does not use the GstBaseTransform class and does not have this problem. I am using GStreamer v0.10.36 with Ubuntu 12.04.

Related

Delay on Gstreamer video rendering

I am working on a GStreamer application that renders decoded frames. The input comes from another application (which gets frames from a network camera) that delivers H.264-encoded frames.
The gstreamer pipeline I use is as follows:
appsrc ! h264parse ! avdec_h264 ! videoconvert ! ximagesink
The appsrc creates the GstBuffer and timestamps it, starting from 0.
The rendered output seems to be delayed by approximately 2 seconds.
How do I reduce the latency in this case?
Any help is appreciated.
The appsrc's properties are set (using g_object_set()) as below:
stream-type = 0
format = GST_FORMAT_TIME
is-live = true
max-latency = 0
min-latency = 0
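In C, that configuration corresponds roughly to the following sketch (the appsrc variable name is an assumption, not taken from the application):

/* Configure appsrc as a live, time-based source with zero declared latency. */
g_object_set (G_OBJECT (appsrc),
    "stream-type", 0,               /* GST_APP_STREAM_TYPE_STREAM */
    "format", GST_FORMAT_TIME,
    "is-live", TRUE,
    "min-latency", (gint64) 0,
    "max-latency", (gint64) 0,
    NULL);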
Update:
I tried sending a latency event of -2 seconds (experimental) to the pipeline
GstClockTime latency = (-2 * gst_util_uint64_scale_int (1, GST_SECOND, 1));
GstEvent *event = gst_event_new_latency (latency);
gst_element_send_event (pipeline, event);
This did not help; it made the output really choppy.
As of now, this is my best answer:
Use the do-timestamp property of the appsrc GStreamer element.
This reduced the latency to under roughly 200 ms.
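A minimal sketch of that change (do-timestamp comes from GstBaseSrc, so it is available on appsrc; when it is enabled, the hand-made timestamps starting from 0 should be dropped):

/* Let appsrc timestamp buffers against the pipeline clock as they arrive. */
g_object_set (G_OBJECT (appsrc), "do-timestamp", TRUE, NULL);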

What causes "resource temporarily unavailable" in v4l2

I have compiled the adv7180 driver available here.
I am unloading the ov5642 camera driver (which in my case is built in) and loading the adv7180_tvin module, and after that I load the mxc_v4l2_capture module, which creates video0 in /dev.
(dmesg command says: "mxc camera on IPU2_CSI1 registered as video0")
But when I try to access video0 with v4l2-ctl I get the message "resource temporarily unavailable", and when I use GStreamer I get the message "Can not open /dev/video0" (even though the device node really is created).
Is this a problem in the device tree settings, or can it be caused by something else? Which tools should I use to find out what causes this issue?
My device tree settings look like below:
&i2c3 {
    adv7180: adv7180@20 {
        compatible = "adv,adv7180";
        reg = <0x20>;
        clocks = <&clks IMX6QDL_CLK_CKO2>;
        clock-names = "csi_mclk";
        pwn-gpios = <&gpio3 10 GPIO_ACTIVE_LOW>;
        ipu_id = <1>;
        csi_id = <1>;
        mclk = <24000000>;
        mclk_source = <0>;
        pinctrl-names = "default";
        pinctrl-0 = <&pinctrl_hummingboard2_parallel>;
        cvbs = <1>;
    };
};
I should add that before the adv7180 I was using the above settings for the ov5642 camera (excluding the cvbs setting) and everything worked properly.
EDIT:
OK, I have one clue.
When I load the modules, dmesg shows the message "mxc_v4l2_master_attach: ipu(0:1)/csi(1:1)/mipi(0:0) doesn't match".
But it only happens when ipu_id = <1> in both the v4l2_cap device tree settings and the adv7180 settings. When I change ipu_id to <0> in the v4l2 and adv7180 settings, dmesg shows "parallel attach to IPU1 CSI1" and I can access /dev/video0 successfully with the v4l2-ctl tool.
But in my case the only possibility is to use IPU2_CSI1.
Why can't I assign IPU2 to the adv7180 when I was using it successfully with the ov5642?
To my knowledge the i.MX6 has two IPUs. I think that by default the IPU1 parallel interface is not enabled in the board file, so you need to check the IOMUXC_GPR1 register setting (bits 19 and 20) for IPU/CSI1 and pass the csi_id in your camera driver.
As you are using the parallel interface, check your pin muxing settings in your device tree as well (this is not required for the serial interface).
Edit:
There are two ways you can follow to update the register setting from kernel space (board file or camera driver) itself:
1. From the board file:
struct regmap *gpr;

gpr = syscon_regmap_lookup_by_compatible("fsl,imx6q-iomuxc-gpr");
/* Set bit 20 of IOMUXC_GPR1: enable the parallel interface to IPU2 CSI1. */
regmap_update_bits(gpr, IOMUXC_GPR1, 1 << 20, 1 << 20);
2. From the board file or camera driver:
void __iomem *va_ipu2_address;
u32 reg_val;

/* IOMUXC_GPR1 is at physical address 0x020E0004; map just that register. */
va_ipu2_address = ioremap(0x020e0004, 4);
reg_val = readl(va_ipu2_address);
/* Enable parallel interface to IPU2 CSI1 (bit 20). */
writel(reg_val | 1 << 20, va_ipu2_address);
iounmap(va_ipu2_address);
Thanks for your answer.
My pin muxing looks like this:
&iomuxc {
    hummingboard2 {
        pinctrl_hummingboard2_parallel: hummingboard2_parallel {
            fsl,pins = <
                MX6QDL_PAD_EIM_A24__IPU2_CSI1_DATA19 0x0b0b1
                MX6QDL_PAD_EIM_A23__IPU2_CSI1_DATA18 0x0b0b1
                MX6QDL_PAD_EIM_A22__IPU2_CSI1_DATA17 0x0b0b1
                MX6QDL_PAD_EIM_A21__IPU2_CSI1_DATA16 0x0b0b1
                MX6QDL_PAD_EIM_A20__IPU2_CSI1_DATA15 0x0b0b1
                MX6QDL_PAD_EIM_A19__IPU2_CSI1_DATA14 0x0b0b1
                MX6QDL_PAD_EIM_A18__IPU2_CSI1_DATA13 0x0b0b1
                MX6QDL_PAD_EIM_A17__IPU2_CSI1_DATA12 0x0b0b1
                MX6QDL_PAD_EIM_DA11__IPU2_CSI1_HSYNC 0x0b0b1
                MX6QDL_PAD_EIM_DA12__IPU2_CSI1_VSYNC 0x0b0b1
                MX6QDL_PAD_EIM_A16__IPU2_CSI1_PIXCLK 0x0b0b1
                MX6QDL_PAD_EIM_DA10__GPIO3_IO10 0x400130b1
            >;
        };
    };
};
and it has been working successfully with the ov5642 camera.
Now I see that the adv7180 driver does not take an ipu_id argument from the device tree, so I think it is using the default IPU, which is (I think) IPU1.
I've been playing around with how to change the setting in IOMUXC_GPR1. Bit 20 needs to be set ("enable parallel interface to IPU2 CSI1"), but I have run out of ideas for how to do it from the device tree.
OK, I found it!
I couldn't set bit 20 of the IOMUXC_GPR1 register from the mach-imx6q.c file, so I did it this way:
in console:
sudo devmem2 0x20e0004
and read the existing value (which in my case was 0x48643005). Then I set bit 20 to one, which gives 0x48743005, and wrote this value back into the register:
sudo devmem2 0x20e0004 w 0x48743005
Next I loaded the adv7180_tvin and mxc_v4l2_capture modules and captured frames using GStreamer:
gst-launch-1.0 imxv4l2videosrc device=/dev/video0 ! imxipuvideotransform ! autovideosink deinterlace=true
Everything works great! Thanks for the help!

Reading console output from mplayer to parse track's position/length

When you run mplayer, it displays the playing track's position and length (among other information) through what I assume is stdout.
Here's a sample output from mplayer:
MPlayer2 2.0-728-g2c378c7-4+b1 (C) 2000-2012 MPlayer Team
Cannot open file '/home/pi/.mplayer/input.conf': No such file or directory
Failed to open /home/pi/.mplayer/input.conf.
Cannot open file '/etc/mplayer/input.conf': No such file or directory
Failed to open /etc/mplayer/input.conf.
Playing Bomba Estéreo - La Boquilla [Dixone Remix].mp3.
Detected file format: MP2/3 (MPEG audio layer 2/3) (libavformat)
[mp3 @ 0x75bc15b8]max_analyze_duration 5000000 reached
[mp3 @ 0x75bc15b8]Estimating duration from bitrate, this may be inaccurate
[lavf] stream 0: audio (mp3), -aid 0
Clip info:
album_artist: Bomba Estéreo
genre: Latin
title: La Boquilla [Dixone Remix]
artist: Bomba Estéreo
TBPM: 109
TKEY: 11A
album: Unknown
date: 2011
Load subtitles in .
Selected audio codec: MPEG 1.0/2.0/2.5 layers I, II, III [mpg123]
AUDIO: 44100 Hz, 2 ch, s16le, 320.0 kbit/22.68% (ratio: 40000->176400)
AO: [pulse] 44100Hz 2ch s16le (2 bytes per sample)
Video: no video
Starting playback...
A: 47.5 (47.4) of 229.3 (03:49.3) 4.1%
The last line (A: 47.5 (47.4) of 229.3 (03:49.3) 4.1%) is what I'm trying to read but, for some reason, it's never received by the Process.OutputDataReceived event handler.
Am I missing something? Is mplayer using some non-standard way of outputting the "A:" line to the console?
Here's the code in case it helps:
Public Overrides Sub Play()
player = New Process()
player.EnableRaisingEvents = True
With player.StartInfo
.FileName = "mplayer"
.Arguments = String.Format("-ss {1} -endpos {2} -volume {3} -nolirc -vc null -vo null ""{0}""",
tmpFileName,
mTrack.StartTime,
mTrack.EndTime,
100)
.CreateNoWindow = False
.UseShellExecute = False
.RedirectStandardOutput = True
.RedirectStandardError = True
.RedirectStandardInput = True
End With
AddHandler player.OutputDataReceived, AddressOf DataReceived
AddHandler player.ErrorDataReceived, AddressOf DataReceived
AddHandler player.Exited, Sub() KillPlayer()
player.Start()
player.BeginOutputReadLine()
player.BeginErrorReadLine()
waitForPlayer.WaitOne()
KillPlayer()
End Sub
Private Sub DataReceived(sender As Object, e As DataReceivedEventArgs)
If e.Data = Nothing Then Exit Sub
If e.Data.Contains("A: ") Then
' Parse the data
End If
End Sub
Apparently, the only solution is to run mplayer in "slave" mode, as explained here: http://www.mplayerhq.hu/DOCS/tech/slave.txt
In this mode we can send commands to mplayer (via stdin) and the response (if any) will be sent via stdout.
Here's a very simple implementation that displays mplayer's current position (in seconds):
using System;
using System.Threading;
using System.Diagnostics;
using System.Collections.Generic;
namespace TestMplayer {
class MainClass {
private static Process player;
public static void Main(string[] args) {
String fileName = "/home/pi/Documents/Projects/Raspberry/RPiPlayer/RPiPlayer/bin/Electronica/Skrillex - Make It Bun Dem (Damian Marley) [Butch Clancy Remix].mp3";
player = new Process();
player.EnableRaisingEvents = true;
player.StartInfo.FileName = "mplayer";
player.StartInfo.Arguments = String.Format("-slave -nolirc -vc null -vo null \"{0}\"", fileName);
player.StartInfo.CreateNoWindow = false;
player.StartInfo.UseShellExecute = false;
player.StartInfo.RedirectStandardOutput = true;
player.StartInfo.RedirectStandardError = true;
player.StartInfo.RedirectStandardInput = true;
player.OutputDataReceived += DataReceived;
player.Start();
player.BeginOutputReadLine();
player.BeginErrorReadLine();
Thread getPosThread = new Thread(GetPosLoop);
getPosThread.Start();
}
private static void DataReceived(object o, DataReceivedEventArgs e) {
Console.Clear();
Console.WriteLine(e.Data);
}
private static void GetPosLoop() {
do {
Thread.Sleep(250);
player.StandardInput.Write("get_time_pos" + Environment.NewLine);
} while(!player.HasExited);
}
}
}
I found the same problem with another application that works in a broadly similar way (dbPowerAmp). In my case the problem was that the process wrote its stdout using Unicode encoding, so I had to set StandardOutputEncoding and StandardErrorEncoding to Unicode before I could start reading.
Your problem seems to be the same: if "A" can't be found in the output, even though the output you posted clearly contains "A:", the characters probably differ when read with the encoding you are currently using.
So try setting the proper encoding when reading the process output; try setting both to Unicode.
ProcessStartInfo.StandardOutputEncoding
ProcessStartInfo.StandardErrorEncoding
Using "read" instead of "readline", and treating the input as binary, will probably fix your problem.
First off, yes, mplayer slave mode is probably what you want. However, if you're determined to parse the console output, it is possible.
Slave mode exists for a reason, and if you're at all serious about using mplayer from within your program, it's worth a little time to figure out how to use it properly. That said, I'm sure there are situations where the wrapper is the appropriate approach. Maybe you want to pretend that mplayer is running normally, and control it from the console, but secretly monitor the file position to resume it later? The wrapper might be easier than translating all of mplayer's keyboard commands into slave-mode commands.
Your problem is likely that you're doing a line-based read on an endless line. That line of output contains \r instead of \n as the line separator, so readline will treat it as a single endless line. sed also fails this way, but other commands (such as grep) treat \r as \n under some circumstances.
Handling of \r is inconsistent and can't be relied on. For instance, my version of grep treats \r as \n when matching if the output is a console, and uses \n to separate the output; but if the output is a pipe, it treats \r as any other character.
For instance:
mplayer TMBG-Older.mp3 2>/dev/null | tr '\r' '\n' | grep "^A: " | sed 's/^A: *\([0-9.]*\) .*/\1/' | tail -n 1
I'm using "tr" here to force it to '\n', so other commands in the pipe can deal with it in a consistent manner.
This pipeline of commands outputs a single line containing ONLY the ending position in seconds, with a decimal point. But if you remove the "tr" command from this pipe, bad things happen. On my system it shows only "0.0" as the position, because "sed" doesn't deal well with the '\r' line separators and ALL the position updates are treated as the same line.
I'm fairly sure your line-based reader doesn't handle \r well either, and that's likely your problem. If so, using "read" instead of "readline" and treating the input as binary is probably the correct solution.
There are other problems with this approach, though. Buffering is a big one: ^C causes this command to output nothing, and mplayer must quit gracefully to show anything at all, because pipelines buffer data and those buffers get discarded on SIGINT.
If you really wanted to get fancy, you could probably cat several input sources together, tee the output several ways, and REALLY write a wrapper around mplayer. A wrapper that's fragile, complicated, and might break every time mplayer is updated, a user does something unexpected, the name of the file being played contains something weird, or a SIGSTOP or SIGINT arrives. And probably other things that I haven't thought of.
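For what it's worth, here is a small C sketch of the "read raw bytes and treat '\r' as a line break" idea (the question's code is .NET; this only illustrates the parsing, and assumes mplayer is on the PATH and a local song.mp3 exists):

#include <stdio.h>
#include <string.h>

int main(void)
{
    /* mplayer rewrites its status line with '\r', so split on '\r' too. */
    FILE *p = popen("mplayer -nolirc -vc null -vo null song.mp3 2>/dev/null", "r");
    char line[512];
    size_t len = 0;
    int c;

    if (!p)
        return 1;
    while ((c = fgetc(p)) != EOF) {
        if (c == '\r' || c == '\n') {
            line[len] = '\0';
            if (len > 0 && strncmp(line, "A:", 2) == 0)
                printf("status: %s\n", line);   /* position/length line */
            len = 0;
        } else if (len < sizeof(line) - 1) {
            line[len++] = (char) c;
        }
    }
    pclose(p);
    return 0;
}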

Gstreamer custom videosink for playbin

I'm trying to create a custom videosink for playbin in GStreamer 1.6.3.
The final idea is to have a videomixer inside the videosink, to be able to do... stuff.
At the moment I would simply like to create a custom Bin that encapsulates a videosink.
The relevant parts of the code at the moment are:
def get_videomix_bin(self):
    mix_bin = Gst.Bin.new('sink')
    sink = Gst.ElementFactory.make('glimagesink')
    gp = Gst.GhostPad.new('vs', sink.get_static_pad('sink'))
    mix_bin.add(sink)
    mix_bin.add_pad(gp)
    return mix_bin

def get_pipeline(self, videosink):
    """A basic playbin pipeline"""
    self.pipeline = Gst.ElementFactory.make('playbin')
    videosink = self.get_videomix_bin()
    self.pipeline.set_property('video-sink', videosink)
    self.fireEvent('pipeline-created')
This code is part of a bigger piece of software that I cannot post in full, but if I comment out the self.pipeline.set_property('video-sink', videosink) part it works, so I tend to think that the problem is somewhere there.
It... well, it basically doesn't work. The pipeline won't start.
With GST_DEBUG=2 I get this warning:
0:00:00.758103367 15560 0x7f81000050a0 WARN uridecodebin gsturidecodebin.c:939:unknown_type_cb:<uridecodebin0> warning: No decoder available for type 'video/x-h264, stream-format=(string)avc, alignment=(string)au, level=(string)3.1, profile=(string)main, codec_data=(buffer)014d401fffe1001c674d401fe8802802dd80b501010140000003004000000c83c60c448001000468ebaf20, width=(int)1280, height=(int)720, framerate=(fraction)25/1, pixel-aspect-ratio=(fraction)1/1, parsed=(boolean)true'.
You have to name the ghost pad on the videosink bin "sink", not "vs". The pad names are part of the API, and sink elements are expected to have a pad called "sink".
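For reference, the same pattern in C would look roughly like this (a sketch, not the asker's code; the variable playbin is assumed to be the playbin element):

GstElement *bin = gst_bin_new ("videosink_bin");
GstElement *vsink = gst_element_factory_make ("glimagesink", NULL);
GstPad *target, *ghost;

gst_bin_add (GST_BIN (bin), vsink);
target = gst_element_get_static_pad (vsink, "sink");
/* The ghost pad must be named "sink" so playbin can link to the bin. */
ghost = gst_ghost_pad_new ("sink", target);
gst_element_add_pad (bin, ghost);
gst_object_unref (target);

g_object_set (playbin, "video-sink", bin, NULL);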

Linux ALSA Driver using channel count 3

I am running my ALSA driver on Ubuntu 14.04, 64-bit, with the 3.16.0-30-generic kernel.
The hardware is proprietary, hence I can't give many details.
The following is the existing driver implementation:
The driver is given the sample format, sample rate and channel count as input via module parameters. (Due to requirements, the inputs need to be provided via module parameters.)
Initial snd_pcm_hardware structure for playback path.
#define DEFAULT_PERIOD_SIZE (4096)
#define DEFAULT_NO_OF_PERIODS (1024)
static struct snd_pcm_hardware xxx_playback =
{
.info = SNDRV_PCM_INFO_MMAP |
SNDRV_PCM_INFO_INTERLEAVED |
SNDRV_PCM_INFO_MMAP_VALID |
SNDRV_PCM_INFO_SYNC_START,
.formats = SNDRV_PCM_FMTBIT_S16_LE,
.rates = (SNDRV_PCM_RATE_8000 | \
SNDRV_PCM_RATE_16000 | \
SNDRV_PCM_RATE_48000 | \
SNDRV_PCM_RATE_96000),
.rate_min = 8000,
.rate_max = 96000,
.channels_min = 1,
.channels_max = 1,
.buffer_bytes_max = (DEFAULT_PERIOD_SIZE * DEFAULT_NO_OF_PERIODS),
.period_bytes_min = DEFAULT_PERIOD_SIZE,
.period_bytes_max = DEFAULT_PERIOD_SIZE,
.periods_min = DEFAULT_NO_OF_PERIODS,
.periods_max = DEFAULT_NO_OF_PERIODS,
};
Similar values are used for the capture-side snd_pcm_hardware structure.
Please note that the values below are replaced in the playback open entry point, based on the current audio test configuration
(the user provides the audio format, rate and channel count via module parameters as inputs to the driver, and these are filled back into the snd_pcm_hardware structure):
xxx_playback.formats = user_format_input
xxx_playback.rates = xxx_playback.rate_min, xxx_playback.rate_max = user_sample_rate_input
xxx_playback.channels_min = xxx_playback.channels_max = user_channel_input
Similarly, the values are refilled for the capture snd_pcm_hardware structure in the capture open entry point.
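A rough sketch of what such an open callback could look like (names are taken from the description above where they exist and are otherwise assumptions, not the actual driver code):

static int xxx_playback_open(struct snd_pcm_substream *substream)
{
    struct snd_pcm_runtime *runtime = substream->runtime;

    /* Narrow the hardware description to the values passed in via the
     * module parameters before handing it to ALSA. */
    xxx_playback.formats      = user_format_input;
    xxx_playback.rate_min     = user_sample_rate_input;
    xxx_playback.rate_max     = user_sample_rate_input;
    xxx_playback.channels_min = user_channel_input;
    xxx_playback.channels_max = user_channel_input;

    runtime->hw = xxx_playback;
    return 0;
}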
The hardware is configured for clocks based on channel_count, format and sample_rate, and the driver registers successfully with the ALSA layer.
aplay/arecord work fine for channel_count = 1, 2 or 4.
During aplay/arecord, when the "runtime->channels" value is checked in the driver, it reflects the configured channel_count, which sounds correct to me.
The recorded data matches what was played, since it is a loopback test.
But when I use channel_count = 3, both aplay and arecord report
"Broken configuration for this PCM: no configurations available" for a wave file with channel count 3.
ex: Playing WAVE './xxx.wav' : Signed 16 bit Little Endian, Rate 48000 Hz, Channels 3
ALSA lib pcm_params.c:2162:(snd1_pcm_hw_refine_slave) Slave PCM not usable
aplay: set_params:1204: Broken configuration for this PCM: no configurations available
With the following changes I was able to move ahead a bit:
.........................
Method 1:
The driver is given channel_count 3 as input via a module parameter.
Modified the driver to fill the snd_pcm_hardware structure with playback->channels_min = 2 and playback->channels_max = 3; similar values for the capture path.
aplay/arecord report 'channel count not available', even though the wave file in use has 3 channels.
ex: aplay -D hw:CARD=xxx,DEV=0 ./xxx.wav Playing WAVE './xxx.wav' : Signed 16 bit Little Endian, Rate 48000 Hz, Channels 3
aplay: set_params:1239: Channels count non available
I tried aplay/arecord with plughw, and they moved ahead:
arecord -D plughw:CARD=xxx,DEV=0 -d 3 -f S16_LE -r 48000 -c 3 ./xxx_rec0.wav
aplay -D plughw:CARD=xxx,DEV=0 ./xxx.wav
Recording WAVE './xxx_rec0.wav' : Signed 16 bit Little Endian, Rate 48000 Hz, Channels 3
Playing WAVE './xxx.wav' : Signed 16 bit Little Endian, Rate 48000 Hz, Channels 3
End of Test
During aplay/arecord, when the "runtime->channels" value is checked in the driver, it returns 2, but the played wave file has channel count 3.
When the data in the recorded file is checked, it is all silence.
.........................
Method 2:
The driver is given channel_count 3 as input via a module parameter.
Modified the driver to fill the snd_pcm_hardware structure with playback->channels_min = 3 and playback->channels_max = 4; similar values for the capture path.
aplay/arecord report 'channel count not available', even though the wave file in use has 3 channels.
I tried aplay/arecord with plughw, and they moved ahead.
During aplay/arecord, when the "runtime->channels" value is checked in the driver, it returns 4, but the played wave file has channel count 3.
When the data in the recorded file is checked, it is all silence.
.........................
So from the above observations, runtime->channels is either 2 or 4; 3 channels is never used by the ALSA stack even though it was requested. When plughw is used, ALSA converts the data to run with 2 or 4 channels.
Can anyone help with why I am unable to use channel count 3?
I will provide more information if needed.
Thanks in advance.
A period (and the entire buffer) must contain an integral number of frames, i.e., you cannot have partial frames.
With three channels, one frame has six bytes. The fixed period size (4096) is not divisible by six without remainder.
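To make the arithmetic concrete, a sketch of picking a period size that is an integral number of frames (illustrative only, not the original driver code):

/* For S16_LE, one sample is 2 bytes, so one frame is channels * 2 bytes.
 * With 3 channels a frame is 6 bytes: 4096 is not a multiple of 6, but
 * 4092 (682 frames) is, which is why 4092 works below. */
static unsigned int xxx_period_bytes(unsigned int channels)
{
    unsigned int frame_bytes = channels * 2;

    return (DEFAULT_PERIOD_SIZE / frame_bytes) * frame_bytes;
}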
Thanks CL.
I used a period size of 4092 for this particular test case with channel count 3, and was able to do the loopback successfully (without using plughw).
One last question: when I used plughw earlier and runtime->channels was either 2 or 4, why was the recorded data all silence?

Resources