Getting a series of errors while streaming with an RTSP URL from an IP camera

I'm using opencv-python to get a live stream from an IP camera over RTSP, and I'm getting a series of errors, including video decoding errors. Is there a way to get rid of these errors? Could someone shed some light on converting the live stream into images without any errors?
error while decoding MB <some digit>, bytestream -<some no.>
A non-intra slice in an IDR NAL unit. decode_slice_header error. no frame
reference picture missing during reorder. Missing reference picture, default is 65634
non-existing PPS 1 referenced. Invalid NAL unit 1, skipping.
[rtsp # 0x41e10e0] Undefined type (30)
SEI type 157 size 1464 truncated at 307
Note: I use a Hikvision 2CD123P-I3 model and I use this URL for the stream: rtsp://username:pwd@ip_address:554/Streaming/Channels/101/
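The messages above ("error while decoding MB", "reference picture missing", invalid NAL units) usually mean RTP packets are being dropped, which is common when the RTSP session runs over UDP. Below is a minimal sketch (not from the original post) of forcing TCP transport through OpenCV's FFmpeg backend; it assumes a reasonably recent opencv-python build that honors the OPENCV_FFMPEG_CAPTURE_OPTIONS environment variable.

# Force RTSP over TCP so dropped UDP packets stop corrupting the H.264 stream.
# Assumes opencv-python with the FFmpeg backend; set the variable before opening the capture.
import os
import cv2

os.environ["OPENCV_FFMPEG_CAPTURE_OPTIONS"] = "rtsp_transport;tcp"

cap = cv2.VideoCapture("rtsp://username:pwd@ip_address:554/Streaming/Channels/101/", cv2.CAP_FFMPEG)
while True:
    ok, frame = cap.read()
    if not ok:                            # skip frames the decoder could not reconstruct
        continue
    cv2.imwrite("frame.jpg", frame)       # or pass the frame on for further processing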

Related

Strange behavior of OTP gen_tcp with settings {packet,4} and using NodeJS "frame-stream" for TCP communication

I've been struggling for a while to get my messages framed correctly between my NodeJS server and my erlang gen_tcp server. I was using {packet,line} successfully until I had to send large data messages and needed to switch to message size framing.
I set gen_tcp to {packet,2} and I'm using the library from https://github.com/davedoesdev/frame-stream for the NodeJS TCP decode side. It is ALSO set to packet size option 2, and I have tried packet size option 4.
I found that any message up to 127 characters long works well with this setup, but any message longer than that has a problem.
I ran a test by sending longer and longer messages from gen_tcp and then reading out the first four bytes received on the NodeJS side:
on message 127:
HEADER: 0 0 0 127
Frame length 127
on message 128:
HEADER: 0 0 0 239 <----- This should be 128
Frame length 239 <----- This should be 128
Theories:
Some character encoding mismatch since it's on the number 128 (likely?)
Some error in either gen_tcp or the library (highly unlikely?)
Voodoo magic curse that makes me work on human-rights day (most likely)
Data from wireshark shows the following:
The header bytes are encoded properly by gen_tcp past 128 characters since the hex values proceed as follows:
[00][7e][...] (126 length)
[00][7f][...] (127 length)
[00][80][...] (128 length)
[00][81][...] (129 length)
So it must be that the error lies in the way the library on the NodeJS side calls Node's readUInt16BE(0) or readUInt32BE(0) functions. But I checked the endianness and both are big-endian.
If the header bytes are [A,B] then, in binary, this error occurs after
[00000000 01111111]
In other words, readUInt16BE(0) reads [00000000 10000000] as 0xef, which is not even an endianness option...?
Thank you for any help in how to solve this.
Kind Regards
Dale
I figured it out: the problem was caused by setting the socket to receive with UTF-8 encoding, which only passes bytes 0-127 through unchanged. A lone 0x80 byte is not valid UTF-8, so it gets replaced by the replacement character U+FFFD, whose UTF-8 encoding starts with 0xEF - hence the 239 seen in the header.
Don't do this: socket.setEncoding('utf8').
It seems obvious now, but that one line of code is hard to spot.
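As an illustration of the failure mode (my sketch, not from the thread): the same decode/encode round trip can be reproduced in Python, and it yields exactly the 239 that was read back as the frame length.

# Length header for 128, mangled by a UTF-8 decode/encode round trip,
# roughly what happens once socket.setEncoding('utf8') is in effect.
raw = bytes([0x00, 0x80])                      # 2-byte big-endian header, length 128
text = raw.decode("utf-8", errors="replace")   # 0x80 alone is invalid -> U+FFFD
back = text.encode("utf-8")                    # U+FFFD re-encodes as EF BF BD
print(list(back))                              # [0, 239, 191, 189] -> length parsed as 239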

Cobalt raspi-2_gold is unable to play video

When running the raspi-2_gold build of Cobalt, it is unable to play the selected video; it is stuck at a black screen.
What works:
It is able to load all the thumbnails initially
Able to select a video
All video controls working fine
Tried the "stats for nerds" overlay: NO frames received; it displays Codecs, ID, Viewport, Volume, Connection speed, Buffer health.
Also, all thumbnails below the video are shown.
What doesn't:
No video and no audio
Tried videos of all resolutions; the results are the same: no video and no audio
Questions:
Are there specific certificate requirements?
Any audio/video library requirements?
ERROR MESSAGE
[2278:1362215416:ERROR:player_internal.cc(134)] Not implemented reached in void SbPlayerPrivate::SetVolume(double)
[2280:1362347860:WARNING:thread_set_name.cc(36)] Thread name "omx_video_decoder" was truncated to "omx_video_decod"
[2279:1362349124:INFO:player_worker.cc(136)] Try to seek to timestamp 0
[2280:1362352181:INFO:open_max_component_base.cc(82)] Opened "OMX.broadcom.video_decode" with port 130 and 131
[2283:1363620269:INFO:alsa_audio_sink_type.cc(241)] alsa::AlsaAudioSink enters idle loop
[2282:1363554339:FATAL:open_max_component.cc(216)] Check failed: false. OMX_EventError received with 80001000 0
starboard::raspi::shared::open_max::OpenMaxComponent::OnErrorEvent() [0x17eedd8]
starboard::raspi::shared::open_max::OpenMaxComponentBase::OnEvent() [0x17f34ec]
starboard::raspi::shared::open_max::OpenMaxComponentBase::EventHandler() [0x17f375c]
Caught signal: SIGABRT (6)
<unknown> [0x75cc6180]
<unknown> [0x75cc4f70]
Aborted
Thanks to @Andrew Top for the answer. Setting the GPU memory split to 256 MB worked, and videos played quite well. HQ and 360-degree videos need up to 256 MB of GPU memory; 720p and below could be played with at least 200 MB.
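For reference (my note, not part of the original answer): on a Raspberry Pi the GPU memory split can be set in /boot/config.txt, followed by a reboot:

# /boot/config.txt - reserve 256 MB for the GPU so the OMX video decoder has enough memory
gpu_mem=256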

OpenCV 3.2 (Python 3.4.3) with USB cam + Raspberry Pi 3

I am doing a project related to video streaming on the client side (socket programming) and I am getting two errors:
Error 1:
Many people have talked about this error, but I am not able to get a clear view of it - please help me solve the error below.
Corrupt JPEG data: 1 extraneous bytes before marker 0xd0
Corrupt JPEG data: 1 extraneous bytes before marker 0xd0
Corrupt JPEG data: 2 extraneous bytes before marker 0xd0
Corrupt JPEG data: 3 extraneous bytes before marker 0xd0
Corrupt JPEG data: 1 extraneous bytes before marker 0xd2
Corrupt JPEG data: 1 extraneous bytes before marker 0xd4
Corrupt JPEG data: 1 extraneous bytes before marker 0xd0
Corrupt JPEG data: 1 extraneous bytes before marker 0xd0
Corrupt JPEG data: 1 extraneous bytes before marker 0xd1
Even though the above errors appear, the output is good.
Why do these corrupt-data errors occur, how can I solve them, and why do they not affect the stream?
Error 2:
PICTURE
The problem I am facing is this: when I run my server after a successful run, it shows the first-run error (check my picture link provided). After that error I just terminate the server and re-run it, and it works perfectly - this repeats every time. Why?
Once the server has started, it is fine: the frame transfer to the client won't stop until I terminate the server, the server is not terminated when the client presses Ctrl+C, and I can terminate the client and re-run it to connect to the server as many times as I want without the first-run error, for as long as the server is up.
I can also say that once the server is running well, the next run will hit the error, and after that error the next run will be a successful stream again.
Check this out for my server and client program.
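The question does not show the actual first-run error, but the alternating pattern described (one run fails, the next works) is what you typically see when the listening port is still in TIME_WAIT from the previous run and bind() fails with "Address already in use". A hedged sketch, assuming a Python socket server like the one described (the host and port below are placeholders, not from the post):

import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Allow the port to be reused right after the previous server exits,
# instead of failing while the old socket lingers in TIME_WAIT.
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 8000))
srv.listen(1)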

Set content-length when converting video stream to audio (w/ FFMPEG & Node.js)

So I'm building a program that requires that I take a video and convert it to audio. I'm currently streaming the audio directly to the browser via node.js, but I've run into a major problem: I don't know how to find out how many bytes my audio is. As a result, the browser keeps throwing net::ERR_CONTENT_LENGTH_MISMATCH when I don't get the right content-length. I've tried several strategies, all of which have failed:
Computing the size manually (Seconds * bitrate(kbps) * (1024 / 8)).
This produces an approximate answer, since I only know the length down to the nearest couple of seconds. Even though I'm relatively close, I still end up getting the same MISMATCH error.
Piping the Stream to a buffer, getting the buffer's length, and piping the buffer to the browser
This works, but it can take 15-20 seconds to load each song. It's incredibly slow and puts a considerably larger load on the server.
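For illustration (my sketch, not from the thread), here is strategy 1 with an exact duration taken from ffprobe. Even then the estimate rarely matches: variable-bitrate audio and container overhead mean the real byte count differs from duration times nominal bitrate, which is why the Content-Length check still fails. A common way out is to omit Content-Length altogether and let Node send the response with chunked transfer encoding.

import subprocess

def estimated_size_bytes(path, bitrate_kbps):
    # ffprobe prints the container duration in seconds as a bare number
    out = subprocess.check_output([
        "ffprobe", "-v", "error",
        "-show_entries", "format=duration",
        "-of", "default=noprint_wrappers=1:nokey=1",
        path,
    ], text=True)
    duration = float(out)
    return int(duration * bitrate_kbps * 1000 / 8)   # kbit/s -> bytes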

OMXCodec::onEvent -- OMX Bad Parameter

I have been trying to use OMXCodec through Stagefright. I have implemented the code for the ICS version of Android. I have two classes: CustomDataSource, which derives from MediaSource, and CustomOmxCodec, which calls OMXCodec::Create and executes read operations to decode H.264 frames. I tested this implementation on a device with the omx.google.video.avc software decoder and it works fine. Now, when I try to run the same implementation on an Android phone with a hardware H.264 decoder, it returns an error on the read call. The error is as below:
[OMX.MTK.VIDEO.DECODER.AVC] ERROR (0x80001005, 0)
0x80001005 is for OMX_ErrorBadParameter.
and I get error code -1103 on the read operation.
I tried various parameters but no success.
The complete log is as below:
[OMX.MTK.VIDEO.DECODER.AVC] mVideoInputErrorRate (0.000000)
!##!>>create tid (21087) OMXCodec mOMXLivesLocally=0, mIsVideoDecoder (1), mIsVideoEncoder (0), mime(video/avc)
[OMX.MTK.VIDEO.DECODER.AVC] video dimensions are 640X480
mSupportesPartialFrames 1 err 0
[OMX.MTK.VIDEO.DECODER.AVC] allocating 10 buffers of size 65536 on input port.
[OMX.MTK.VIDEO.DECODER.AVC] mMemHeapBase = 0x00E8C288, mOutputBufferPoolMemBase=0x51F8E000, size = 9578848
[OMX.MTK.VIDEO.DECODER.AVC] ERROR (0x80001005, 0)
OMXCodec::onEvent--OMX Bad Parameter!!
Read Error : -1103
I'd be grateful for any direction on this.
From the question, the hardware codec, i.e. OMX.MTK.VIDEO.DECODER.AVC, does not support one of the parameters being passed as part of the configuration steps.
From OMXCodec::create, configureCodec will be invoked which internally invokes a lot of other functions. Since the error is coming as part of OMXCodec::onEvent, one of the possible scenarios could be that the component encountered an error while decoding the first few bytes of the first frame.
Specifically, when the component encounters SPS and PPS (part of codec specific data), the component would typically trigger a portSettingsChanged. From your response, I feel that during this process, there is some error and hence, onEvent has been triggered.
Please share more logs to analyze further.
The MTK H.264 decoder needs the parameters csd-0 and csd-1 to initialize the decoder (you can find more information at http://developer.android.com/reference/android/media/MediaCodec.html). csd-0 and csd-1 stand for the SPS and PPS of H.264. I asked an MTK engineer and he said we can use the code below to set these two parameters.
import android.media.MediaFormat;
import java.nio.ByteBuffer;

// SPS and PPS NAL units (including the 00 00 00 01 start codes) from the original post
byte[] sps = {0,0,0,1,103,100,0,40,-84,52,-59,1,-32,17,31,120,11,80,16,16,31
,0,0,3,3,-23,0,0,-22,96,-108};
byte[] pps = {0,0,0,1,104,-18,60,-128};
// Hand the codec-specific data to the decoder as csd-0 (SPS) and csd-1 (PPS)
MediaFormat mFormat = MediaFormat.createVideoFormat("video/avc", width, height);
mFormat.setByteBuffer("csd-0", ByteBuffer.wrap(sps));
mFormat.setByteBuffer("csd-1", ByteBuffer.wrap(pps));
Maybe that's why we got the OMX Bad Parameter error message.
From the logs and mapping the same to the implemented code, I feel that the following is happening
[OMX.MTK.VIDEO.DECODER.AVC] allocating 10 buffers of size 65536 on input port.
This step allocates the buffers on the input port of the decoder
From the flow of code, after input port buffers are allocated, the buffers on output port are allocated from nativeWindow through allocateOutputBuffersFromNativeWindow.
One of the steps as part of this method implementation is to increase the number of buffers on the output port by 2 and set the same to the OMX component as shown here.
I feel your error might be stemming from this specific point as nBufferSize is a read-only parameter of OMX_IndexParamPortDefinition index. Please refer to the OMX Standard, section 3.1.3.12.1, page 83, which clearly shows that nBufferSize is a read-only parameter.
It appears that your OMX component may be a strictly OMX-compliant component, whereas in Android from ICS onwards certain deviations are expected. This could be one of the potential causes of your error.
P.S: If you could share more information, we could help further.
