Our project has been rejected twice because of section 9.4 of the App Store Review Guidelines. They said we didn't use the HTTP Live Streaming protocol...
We have already downloaded the validator tool they mention in the documentation (http://developer.apple.com/library/ios/#technotes/tn2235/_index.html), but when I run it in the Terminal, the result does not look like what they show in the other document (http://developer.apple.com/library/ios/#technotes/tn2224/_index.html). It only prints the first three lines of information, about the average segment bitrate, but there is no result for the audio bitrate and video bitrate. So... I don't know why. Is there something wrong with the tool, or did I use it the wrong way?
To make my question clearer, I'll list the results below.
This is the result shown in the official documentation, as you can find at http://developer.apple.com/library/ios/#technotes/tn2224/_index.html:
Validating http://devimages.apple.com/iphone/samples/bipbop/gear3/prog_index.m3u8 against iOS 3.1.0
Average segment duration: 8.77 seconds
Average segment bitrate: 510.05 kbit/s
Average segment structural overhead: 96.37 kbit/s (18.89 %)
Video codec: avc1
Video resolution: 480x360 pixels
Video frame rate: 29.97 fps
Average video bitrate: 407.76 kbit/s
H.264 profile: Baseline
H.264 level: 2.1
Audio codec: aac
Audio sample rate: 22050 Hz
Average audio bitrate: 5.93 kbit/s
And the following is my test result:
Average segment duration: 9.79 seconds
Average segment bitrate: 74109.89 bps
Thanks in advance for any answers!
In my country we have always used 25 fps (PAL) for video; but what about audio?
Yesterday I recorded a TV movie with VDR (MPEG-TS format), and mediainfo reports the following for the audio and video.
The audio is MP2, the video H.264.
Audio
Format : MPEG Audio
Format version : Version 1
Format profile : Layer 2
Codec ID : 4
Duration : 3 h 58 min
Bit rate mode : Constant
Bit rate : 128 kb/s
Channel(s) : 2 channels
Sampling rate : 48.0 kHz
Frame rate : 41.667 FPS (1152 SPF)
Compression mode : Lossy
Delay relative to video : -406 ms
Stream size : 219 MiB (6%)
Video
Format : AVC
Format/Info : Advanced Video Codec
Format profile : High@L3
Format settings : CABAC / 4 Ref Frames
Format settings, CABAC : Yes
Format settings, Reference frames : 4 frames
Format settings, picture structure : Frame
Codec ID : 27
Duration : 3 h 58 min
Bit rate : 1 915 kb/s
Width : 720 pixels
Height : 576 pixels
Display aspect ratio : 16:9
Frame rate : 25.000 FPS
How is it possible that the audio and video stay in sync when the audio has a frame rate of about 42 fps?
If I want to re-encode it, do I have to re-encode the audio at 25 fps?
TL;DR: you don't have to worry about it. There are two different meanings of "frames per second".
MP3 is an interesting file format (your recording's MP2 audio is structured the same way). It doesn't have a global header that represents the entire file. Instead, an MP3 file is a concatenation of small individual files called "frames", each a few milliseconds in length. That's why you can often chop an MP3 file in half and the second half plays just fine. It's also what enables VBR MP3 to exist: the sample rate or encoding parameters can change at any point in the file.
So your particular audio stream has a "frame rate" of 41.667 frames per second. Now notice the SPF value of 1152 in parentheses. That's "samples per frame". If you do the math, 1152 samples/frame * 41.667 frames/second is almost exactly 48000 samples per second, identical to the sampling rate reported by the mediainfo tool.
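As a quick sanity check, here is the same arithmetic in Python (the values are taken from the mediainfo output above):

# Quick check of the frame math, using the values from mediainfo.
samples_per_frame = 1152          # MPEG-1 Layer 2 frame size (SPF)
sample_rate = 48000               # Hz, from mediainfo

frame_rate = sample_rate / samples_per_frame
print(frame_rate)                 # 41.666... audio frames per second

frame_duration_ms = 1000 / frame_rate
print(frame_duration_ms)          # 24.0 ms of audio per frame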
When a media player plays a video file, it renders the video stream separately from the audio stream, so it takes very little effort to keep the different rates in sync.
As for your question about re-encoding: the encoding tool you use will do the right thing. The MP3 frame rate is completely orthogonal to the video FPS.
I am encoding a new H.264 video, muxed with audio, into an MP4 file.
How do I correctly calculate the PTS and DTS of the AVPacket and AVFrame for video and audio?
I generate the new video frames and new audio from my source, so there is no original PTS/DTS information. I know the frame rate I need to use (the time_base).
Assuming your frame rate is constant, and after setting the stream time bases correctly: start both PTS counters from zero. The audio PTS will increase by the samples-per-frame count for each frame; this is typically audio_sample_rate / frame_rate (e.g., 48000 / 60 = 800).
For video, things are different and somewhat simpler: the video PTS will increase by the same "frame duration" amount per frame. Use this cheat sheet to calculate the duration (a worked sketch follows the table):
FPS Frame duration
23.98 2002
24.00 2000
25.00 2000
29.97 2002
30.00 2000
50.00 1000
59.94 1001
60.00 1000
Yes, these are somewhat hacky, but they will work.
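A minimal sketch of the bookkeeping, assuming a 48 kHz AAC audio stream (1024 samples per frame) and 29.97 fps video; the time bases are illustrative choices, not values mandated by any container:

from fractions import Fraction

# Illustrative parameters; substitute your real values.
SAMPLE_RATE = 48000                 # audio sample rate, Hz
SAMPLES_PER_FRAME = 1024            # AAC frame size (use 1152 for MP2/MP3)
FPS = Fraction(30000, 1001)         # 29.97 fps video

# Stream time bases: one audio tick = one sample,
# one video tick = one whole frame duration.
audio_time_base = Fraction(1, SAMPLE_RATE)
video_time_base = 1 / FPS

def audio_pts(frame_index):
    # PTS of the n-th audio frame, in audio_time_base ticks.
    return frame_index * SAMPLES_PER_FRAME

def video_pts(frame_index):
    # PTS of the n-th video frame, in video_time_base ticks.
    return frame_index

# With no B-frames, DTS can simply equal PTS.
print([audio_pts(i) for i in range(4)])   # [0, 1024, 2048, 3072]
print([video_pts(i) for i in range(4)])   # [0, 1, 2, 3]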
I've made a movie for RPG Maker VX Ace with just music in the background.
The program only allows .ogv movies (Ogg container, Theora video) to be played. I have no problem with the video quality; however, the sound is distorted and "jumps" (like when we were playing records in the '90s...) when there are high-pitched or reverberating instruments.
The following are my settings for the movie output:
Container: OGG
Video Codec: Theora
Audio Codec: Vorbis
Bit rate: 160 (16 bit)
Sample rate: 44100 (44.1 kHz)
System: Windows 10
Video Editor: Blender 2.79
The .ogg audio files are perfect when played by themselves in RPG Maker Ace as plain audio files. The problem only exists with the audio in .ogv movies.
I have already tried increasing the bit rate and the frame rate, but to no avail.
Does anyone know the standard requirements for audio in movies for RPG Maker Ace?
Thanks for your help!
I actually disobeyed what RPG Maker suggests in its help menu and made the following changes:
CONTAINER: OGG
VIDEO CODEC: H.264
OUTPUT QUALITY: CONSTANT BIT RATE (This is very important)
ENCODING SPEED: Medium
AUDIO CODEC: VORBIS
BIT RATE: 200
FRAME RATE: 48000
Now it works like a charm!
I have this data:
Bit rate: 276 kilobytes/second
File size: 6.17 MB
Channels: 2
Layer: 3
Frequency: 44100 Hz
How can I retrieve the audio duration in seconds or milliseconds?
You can't. To get the duration you need the sampling rate in samples per second, but also the number of channels (mono, stereo, etc.) and the sample size in bytes (usually 1 to 3). And unless it is raw audio, there is also additional data that takes up some space. The 276 kB/s figure does not help here. If it is an MP3, the file is compressed, and you simply can't tell the duration just by looking at the file size.
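For contrast, this is what the calculation would look like if the file were raw PCM; the 16-bit sample size here is an assumption for illustration:

# Duration of *raw* (uncompressed PCM) audio; a compressed file such as
# MP3 cannot be measured this way from its size alone.
file_size_bytes = 6.17 * 1024 * 1024    # 6.17 MB, from the question
sample_rate = 44100                     # Hz
channels = 2
bytes_per_sample = 2                    # assuming 16-bit samples

bytes_per_second = sample_rate * channels * bytes_per_sample
duration_seconds = file_size_bytes / bytes_per_second
print(duration_seconds)                 # ~36.7 s -- valid only for raw PCM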
I am currently working on a project where live video streaming is required, for which I'm using the MediaRecorder API for audio/video recording via the webcam and microphone.
The MediaRecorder API provides a blob every 500 ms, which we use to write/append a .webm file on the server over a WebSocket (Node.js).
var options = { mimeType: 'video/webm', audioBitsPerSecond: 128000, videoBitsPerSecond: 2500000 };
mediaRecorder = new MediaRecorder(stream, options);
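For reference, the server-side append step described above might look like the following (a Python sketch of the Node.js setup, using the third-party websockets package; the file name and port are made up for illustration):

import asyncio
import websockets   # third-party package: pip install websockets

async def handle(ws):
    # Append each binary message (one 500 ms MediaRecorder blob) to the file.
    # Single-argument handler signature per websockets >= 11.
    with open("recording.webm", "ab") as f:
        async for chunk in ws:
            f.write(chunk)

async def main():
    async with websockets.serve(handle, "localhost", 8765):
        await asyncio.Future()   # run forever

asyncio.run(main())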
After writing/appending the .webm file for, say, 17 minutes, a 6.1 MB .webm file is generated on the server with the properties below and a duration of 0. When running this .webm file in an HTML5 video element, the recording plays, but the duration of the video is not calculated/displayed.
Duration: 0
Video:
Dimension: 640 X 480
Codec: VP8
Framerate: 30 frames per second
Bitrate: N/A
Audio:
Codec: Vorbis
Channels: Mono
Sample Rate: 48000 Hz
Bitrate: 239 kbps
Please suggest why the video duration property value is 0, and how the 6.1 MB file size can be reduced.