Cobalt raspi-2_gold is unable to play video or audio

When running the raspi-2_gold build of Cobalt, it is unable to play the selected video and gets stuck on a black screen.
What works:
It is able to load all the thumbnails initially
It is able to select a video
All video controls work fine
Tried the "Stats for nerds" overlay: no frames received; it displays Codecs, ID, Viewport, Volume, Connection speed and Buffer health
All thumbnails below the video are also shown
What doesn't:
No video and no audio
Tried videos of all resolutions; the result is the same: no video and no audio
Questions:
Are there specific certificate requirements?
Are there any audio/video library requirements?
Error message:
[2278:1362215416:ERROR:player_internal.cc(134)] Not implemented reached in void SbPlayerPrivate::SetVolume(double)
[2280:1362347860:WARNING:thread_set_name.cc(36)] Thread name "omx_video_decoder" was truncated to "omx_video_decod"
[2279:1362349124:INFO:player_worker.cc(136)] Try to seek to timestamp 0
[2280:1362352181:INFO:open_max_component_base.cc(82)] Opened "OMX.broadcom.video_decode" with port 130 and 131
[2283:1363620269:INFO:alsa_audio_sink_type.cc(241)] alsa::AlsaAudioSink enters idle loop
[2282:1363554339:FATAL:open_max_component.cc(216)] Check failed: false. OMX_EventError received with 80001000 0
starboard::raspi::shared::open_max::OpenMaxComponent::OnErrorEvent() [0x17eedd8]
starboard::raspi::shared::open_max::OpenMaxComponentBase::OnEvent() [0x17f34ec]
starboard::raspi::shared::open_max::OpenMaxComponentBase::EventHandler() [0x17f375c]
Caught signal: SIGABRT (6)
<unknown> [0x75cc6180]
<unknown> [0x75cc4f70]
Aborted

Thanks to Andrew Top for the answer. Setting the GPU memory split to 256 MB worked, and videos played quite well. The high-quality and 360-degree videos need up to 256 MB of GPU memory; 720p and below could be played with at least 200 MB.
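For anyone hitting the same OMX_EventError, the GPU memory split on a Raspberry Pi can be changed with raspi-config or by editing the boot configuration; a minimal sketch, assuming a Raspbian-style /boot/config.txt:

# /boot/config.txt -- reserve 256 MB of RAM for the GPU, then reboot
gpu_mem=256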

Related

Godot - Video Player frozen on launch

Good Morning Everyone,
I'm trying to use Godot for a very simple app.
Open to a start screen with a looping video.
If button 1 is pressed, change scene, start video 2, and return to the start screen at the end of video 2.
If button 2 is pressed, do the same with video 3.
I'm using WebM as my video sources. Sizes are 14.6 MB, 36.8 MB, 37.4 MB.
I have autoplay selected.
However, the video is frozen on frame 1 and no audio plays. The log prints True for is_playing().
Any advice?
Godot's WebM support had many issues, ranging from slow performance to crashes. Consequently, it has been removed in Godot 4.0 to reduce maintenance cost. Whether or not it will make a return as an official plugin is yet to be determined, but some kind of plugin is likely to be the path forward for WebM support in Godot.
Although Godot 3.x may still get WebM patches, I would encourage you to convert the videos to OGV. There are plenty of free tools that can do this conversion; see the example below.
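For example, ffmpeg (one such free tool) can do the conversion; a minimal sketch, with placeholder file names and assuming an ffmpeg build that includes the Theora and Vorbis encoders:

ffmpeg -i input.webm -c:v libtheora -q:v 7 -c:a libvorbis -q:a 5 output.ogv

Godot 3.x can then play the resulting .ogv file through its VideoPlayer node (as a VideoStreamTheora resource).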

Real duration of audios played through browsers

I need to play 4 audio clips through a web browser.
These clips last 150 ms, 300 ms, 450 ms and 600 ms.
I don't care about latency (if a clip starts 100 ms late, that is not important for my purpose).
But I do care about the duration of these clips: does the 150 ms clip last exactly 150 ms, or is there an error introduced by the sound card or other components?
I know for sure that there is some error (I saw it in a test on a Mac).
My question is: can anyone point me to a paper, an article or anything that discusses playback duration and tests different setups, or tell me whether this error is always very small (less than 10 ms, for example) regardless of the platform (Windows, Mac, old device, new device)?
In other words: if I play a 100 ms clip, how long does it really last (100 ms? More? Less?)
In what manner is the sound not lasting the correct amount of time?
Does the beginning or the end get cut off?
Does the sound play back slower or faster than it should?
In my experience, I've never heard an error in playback rate caused by the browser or the sound card. But I have come across situations where a sound is played back with a different audio format than the one in which it was encoded. For example, a sound encoded at 48000 fps (sample frames per second) but played back at 44100 fps will take longer to finish, while staying very close to the original in pitch (maybe about a half step lower). As a diagnostic step, I recommend confirming the audio format used at each end; how to do so will depend on the systems being used.
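As a rough, hypothetical illustration of how large that kind of error could be (not a claim about the poster's setup): if a clip's samples are encoded at 48000 Hz but the output device clocks them out at 44100 Hz, every duration is stretched by the ratio of the two rates.

# Hypothetical sketch: duration stretch when 48 kHz audio is clocked out at 44.1 kHz
encoded_rate = 48000.0    # sample rate the clip was encoded at
playback_rate = 44100.0   # sample rate the device actually uses
stretch = encoded_rate / playback_rate   # about 1.088
for nominal_ms in (150, 300, 450, 600):
    print("%d ms -> %.1f ms" % (nominal_ms, nominal_ms * stretch))
# 150 ms -> 163.3 ms, 300 ms -> 326.5 ms, 450 ms -> 489.8 ms, 600 ms -> 653.1 ms

If the measured durations all scale by a constant factor like this, a sample-rate mismatch is the likely cause; clipping only at the start or end would point at something else.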

Opencv stereo cameras capture and framerate limits

I am trying to get pairs of images out of a Minoru stereo webcam, currently through OpenCV on Linux.
It works fine when I force a low resolution:
import cv2

left = cv2.VideoCapture(0)
left.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH, 320)
left.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, 240)
right = cv2.VideoCapture(1)  # second camera of the stereo pair
right.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH, 320)
right.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, 240)

while True:
    _, left_img = left.read()
    _, right_img = right.read()
    ...
However, I'm using the images to create depth maps, and a higher resolution would be good. But if I leave the default resolution, or force it to 640x480, I hit errors:
libv4l2: error turning on stream: No space left on device
I have read about USB bandwidth limitations, but:
this happens on the first iteration (first read() from right)
I don't need anywhere near 60 or even 30 FPS, but couldn't manage to reduce "requested FPS" via VideoCapture parameters (if this makes sense)
adding sleeps doesn't seem to help, even between the left/right reads
strangely, if I do a lot of processing (in the while loop), I start noticing "lag": things happening in the real world show up much later in the images read. This suggests that there actually is a buffer somewhere that can and does accumulate several images (a lot of them)
I tried a workaround of creating and releasing a separate VideoCapture for each image read, but this is a bit too slow overall (< 1 FPS) and, more importantly, the images are too far out of sync for stereo matching to work.
I'm trying to understand why this fails, in order to find solutions. It looks like v4l is allocating a single global too-small buffer, used by the 2 capture objects somehow.
Any help would be appreciated.
I had the same problem and found this answer - https://superuser.com/questions/431759/using-multiple-usb-webcams-in-linux
Since both of the Minoru's cameras report the format as 'YUYV' (uncompressed), this is likely a USB bandwidth issue. I lowered the frames per second to 20 (it didn't work at 24) and I can now see both 640x480 images.
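For reference, a minimal sketch of requesting a lower frame rate through OpenCV, using the same OpenCV 2.x-style constants as the question (in OpenCV 3+ the constant is cv2.CAP_PROP_FPS; whether the driver honours the request depends on the camera):

import cv2

left = cv2.VideoCapture(0)
left.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH, 640)
left.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, 480)
left.set(cv2.cv.CV_CAP_PROP_FPS, 20)   # 20 fps keeps both uncompressed streams within USB bandwidth
right = cv2.VideoCapture(1)
right.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH, 640)
right.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, 480)
right.set(cv2.cv.CV_CAP_PROP_FPS, 20)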

Increase frame rate of webcams

I have two webcams (Logitech C310) connected to a BeagleBoard-xM (Angstrom). I am capturing video from both webcams and displaying it in two separate windows using OpenCV. I am using two threads (one for each webcam) to capture and display the video.
The problem is:
The frame rate I am getting from both cameras is very low (around 3 fps at 640x480 resolution)
While running the code, it sometimes works fine but sometimes gives the following error:
Gtk:ERROR:gtkaccelmap.c:113:_gtk_accel_map_init: assertion failed: (accel_entry_ht == NULL)
Aborted

Android : Update on UI Thread very fast

I have an application that plays back video frame by frame, and this is all working. However, it needs audio playback too, and when audio and video run simultaneously the video lags behind the audio.
The logic I am using to display the video frames is as follows:
ProcessVideoThread() {
    // Read the data from the socket.
    // Decode it inside the libvpx library; after decoding I get raw bitmap data.
    // After getting the raw bitmap data, use some mechanism to update the image;
    // here I tried runOnUiThread and a Handler.
}
What seems to be happening is that the UI thread gets a chance to update the image very late: libvpx takes approximately 30 ms to decode a frame, and going through runOnUiThread takes roughly 40 more ms before the image is actually updated, even though the update itself runs on the UI thread.
Can anyone advise me on how to reduce the delay in updating the image on the UI thread?
Looks interesting. If I were in your situation, I would examine the following methods.
1) Try using synchronization between the audio and video threads.
2) Try dropping video frames where the audio is lagging, and reduce the audio frequency where the video is lagging.
You can do that in the following way:
int i = 0;
while (hasNextFrame()) {   // hasNextFrame() is a placeholder for your own frame loop
    if (i % 2 == 0)
        ShowFrame();       // show only every other frame
    i++;
}
What this will do is immediately reduce the video frame rate from 24 fps to 12 fps, so the audio will now keep up with the video. But, as I already mentioned, quality will suffer. This method is called frequency scaling and is a widely used way to sync audio and video.
Refer to the following for a clearer understanding of the ways you can sync audio and video. It is written around FFmpeg; I don't know how much of it you will be able to use directly, but it should definitely give you some ideas.
http://dranger.com/ffmpeg/tutorial05.html
All the best.
