I am new to Android and I need to capture frames at 60fps for RTSP streaming.
I am using the Android native camera API and reading frames in the onImageAvailable callback of AImageReader.
ACAMERA_CONTROL_AE_AVAILABLE_TARGET_FPS_RANGES returns: {15,15}, {20,20}, {24,24}, {30,30}, {7,30}.
If I explicitly set the FPS range to {30,60} via ACAMERA_CONTROL_AE_TARGET_FPS_RANGE, I still see only 30 frames per second in the onImageAvailable callback.
Please share your opinion on how I can achieve 60fps YUV frames using AImageReader.
ACAMERA_CONTROL_AE_AVAILABLE_TARGET_FPS_RANGES returns the only FPS ranges you can choose from, and {30,60} is not one of them, so the camera will not deliver more than 30 frames per second to your AImageReader.
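If it helps to confirm what the device actually advertises, here is a minimal sketch using the Java Camera2 API (the NDK exposes the same metadata through ACameraManager/ACameraMetadata); the class name, tag, and method name are just placeholders:

import android.content.Context;
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCharacteristics;
import android.hardware.camera2.CameraManager;
import android.hardware.camera2.params.StreamConfigurationMap;
import android.util.Log;
import android.util.Range;
import java.util.Arrays;

class FpsCheck {
    static void logFpsCapabilities(Context context, String cameraId) throws CameraAccessException {
        CameraManager manager = (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
        CameraCharacteristics chars = manager.getCameraCharacteristics(cameraId);

        // FPS ranges usable with a normal capture session (this is the list quoted in the question).
        Range<Integer>[] aeRanges = chars.get(CameraCharacteristics.CONTROL_AE_AVAILABLE_TARGET_FPS_RANGES);

        // 60/120 fps is often only offered through constrained high-speed sessions;
        // this array is empty if the device does not support them at all.
        StreamConfigurationMap map = chars.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
        Range<Integer>[] highSpeedRanges = map.getHighSpeedVideoFpsRanges();

        Log.d("FpsCheck", "AE ranges: " + Arrays.toString(aeRanges)
                + ", high-speed ranges: " + Arrays.toString(highSpeedRanges));
    }
}

Even when high-speed ranges are present, constrained high-speed sessions generally accept only preview and recording surfaces, not an ImageReader/AImageReader, so 60fps YUV frames may simply not be achievable on this device.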
I have an idea that I have been working on, but there are some technical details that I would love to understand before I proceed.
From what I understand, Linux communicates with the underlying hardware through device files under /dev. I was messing around with my webcam input to Zoom and found someone explaining that I need to create a virtual device and attach it to the output of another program called v4l2loopback.
My questions are:
1- How does Zoom detect the webcams available for input? My /dev directory has two "files" called video (/dev/video0 and /dev/video1), yet Zoom only detects one webcam. Is the webcam communication done through this video file or not? If yes, why doesn't simply creating one affect Zoom's input choices? If not, how does Zoom detect the input and read the webcam feed?
2- Can I create a virtual device and write a kernel module for it that feeds the input from a local file? I have written a lot of kernel modules, and I know they have read, write, and release methods. I want to parse the video whenever a read request from Zoom is issued. How should the video be encoded? Is it MP4, a raw format, or something else? How fast should I be sending input (in terms of kilobytes)? I think it is a function of my webcam recording specs: if it is 1920x1080, each pixel is 3 bytes (RGB), and it is recording at 20 fps, I can simply calculate how many bytes are generated per second. But how does Zoom expect the input to be fed to it? Assuming that it is sending the stream in real time, it should be reading input every few milliseconds. How do I get access to such information?
Thank you in advance. This is a learning experiment; I am just trying to do something fun that I am motivated to do while learning more about Linux-hardware communication. I am still a beginner, so please go easy on me.
Apparently, there are two types of /dev/video* files: one for metadata and one for the actual stream from the webcam. Creating a virtual device of the same type as the stream device in /dev did result in Zoom recognizing it as an independent webcam, even without creating its metadata file. I finally achieved what I wanted, but I used the OBS Studio virtual camera feature that was added in update 26.0.1, and it is working perfectly so far.
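For reference, the raw-bandwidth arithmetic from question 2 works out as below. This is purely illustrative and assumes uncompressed RGB24; real V4L2 webcams typically deliver YUYV or MJPEG, so the actual figure would differ.

public class RawBandwidth {
    public static void main(String[] args) {
        long width = 1920, height = 1080, bytesPerPixel = 3, fps = 20;
        long bytesPerFrame = width * height * bytesPerPixel;  // 6,220,800 bytes (~5.9 MiB per frame)
        long bytesPerSecond = bytesPerFrame * fps;            // 124,416,000 bytes (~118.7 MiB/s)
        System.out.println(bytesPerFrame + " bytes per frame, " + bytesPerSecond + " bytes per second");
    }
}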
I'm developing an app where I want to stream video from a URL. I'm currently using ExoPlayer for streaming and it works fine, but there is a delay of around 5 seconds before the video loads and starts playing. Is there any way to reduce this start time, or to do something like how TikTok streams its videos on the go? There's no lag in TikTok. Could anyone suggest a workaround for this?
I am quite a newbie with ExoPlayer, but I have learnt this:
I assume you are using a RecyclerView to load a lot of videos, and that you are playing each video via a URL.
WHAT YOU CAN DO:
a. The solution is to precache the video before it even appears on the screen. For example, while the video at position 0 is playing, you precache and prebuffer position 1.
Hence, you always precache/prebuffer getAdapterPosition() + 1;
This makes ExoPlayer load the URL even before the user scrolls to that video.
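A minimal sketch of that idea, assuming a recent ExoPlayer 2.x and progressive HTTP URLs; PreCacher, the 512 KB head size, and the 100 MB cache cap are illustrative choices, not required values:

import android.content.Context;
import android.net.Uri;
import com.google.android.exoplayer2.database.StandaloneDatabaseProvider;
import com.google.android.exoplayer2.upstream.DataSpec;
import com.google.android.exoplayer2.upstream.DefaultHttpDataSource;
import com.google.android.exoplayer2.upstream.cache.CacheDataSource;
import com.google.android.exoplayer2.upstream.cache.CacheWriter;
import com.google.android.exoplayer2.upstream.cache.LeastRecentlyUsedCacheEvictor;
import com.google.android.exoplayer2.upstream.cache.SimpleCache;
import java.io.File;

// Hypothetical helper: pre-buffers the first ~512 KB of the next item's URL into a shared cache.
final class PreCacher {
    private final SimpleCache cache;

    PreCacher(Context context) {
        // SimpleCache must be the only instance pointing at this directory.
        cache = new SimpleCache(
                new File(context.getCacheDir(), "media"),
                new LeastRecentlyUsedCacheEvictor(100L * 1024 * 1024),
                new StandaloneDatabaseProvider(context));
    }

    SimpleCache getCache() {
        return cache;
    }

    // Call this from a background thread for getAdapterPosition() + 1.
    void preCache(String url) {
        DataSpec dataSpec = new DataSpec.Builder()
                .setUri(Uri.parse(url))
                .setLength(512 * 1024) // only the head of the file
                .build();
        CacheDataSource dataSource = new CacheDataSource.Factory()
                .setCache(cache)
                .setUpstreamDataSourceFactory(new DefaultHttpDataSource.Factory())
                .createDataSourceForDownloading();
        try {
            new CacheWriter(dataSource, dataSpec, /* temporaryBuffer= */ null, /* progressListener= */ null).cache();
        } catch (Exception e) {
            // Pre-caching is best effort; on failure the player just buffers normally.
        }
    }
}

For playback, build your MediaSource with a CacheDataSource.Factory pointing at the same cache so ExoPlayer reads the pre-buffered bytes instead of hitting the network again. Lowering the start-up buffer via DefaultLoadControl.Builder().setBufferDurationsMs(...) can also shave off some of the initial delay.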
I want to build a SoundWave visualization by sampling an audio stream.
I read that a good method is to get the amplitude of the audio stream and represent it with a Polygon. But suppose we have an AudioGraph with just a DeviceInputNode and a FileOutputNode (a simple recorder).
How can I get the amplitude from a node of the AudioGraph?
What is the best way to periodize this sampling? Is a DispatcherTimer good enough?
Any help will be appreciated.
First, everything you care about is kind of here:
uwp AudioGraph audio processing
But since you have a different starting point, I'll explain some more core things.
An AudioGraph node is already periodized for you -- it's generally how audio works. I think Win10 defaults to periods of 10ms and/or 20ms, but this can be set (theoretically) via the AudioGraphSettings.DesiredSamplesPerQuantum setting, with the AudioGraphSettings.QuantumSizeSelectionMode = QuantumSizeSelectionMode.ClosestToDesired; I believe the success of this functionality actually depends on your audio hardware and not the OS specifically. My PC can only do 480 and 960. This number is how many samples of the audio signal to accumulate per channel (mono is one channel, stereo is two channels, etc...), and this number will also set the callback timing as a by-product.
Win10 and most devices default to a 48000Hz sample rate, which means they are measuring/outputting data that many times per second. So with my QuantumSize of 480, for every frame of audio I am getting 48000/480 = 100 frames every second, which means I'm getting them every 10 milliseconds by default. If you set your quantum to 960 samples per frame, you would get 50 frames every second, or a frame every 20ms.
To get a callback into that frame of audio every quantum, you need to register an event into the AudioGraph.QuantumProcessed handler. You can directly reference the link above for how to do that.
So by default, a frame of data is stored in an array of 480 floats in [-1, +1]. And to get the amplitude, you just average the absolute values of this data.
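The amplitude math itself is language-agnostic; here is a minimal sketch, written in Java for illustration, since the UWP QuantumProcessed handler would do the same thing in C# over the floats it reads from the frame buffer:

// `samples` is assumed to hold one quantum of samples in [-1, +1]
// (e.g. 480 floats at 48000 Hz, i.e. one callback every 10 ms).
static float averageAmplitude(float[] samples) {
    if (samples.length == 0) {
        return 0f;
    }
    float sum = 0f;
    for (float s : samples) {
        sum += Math.abs(s); // rectify, then average
    }
    return sum / samples.length;
}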
This part, including handling multiple channels of audio, is explained more thoroughly in my other post.
Have fun!
I have an app that delivers video content using HTTP Live Streaming. I want the app to retrieve the appropriate rendition based on the device's screen aspect ratio (either 4x3 or 16x9). I ran Apple's tool to create the master .m3u8 playlist file (variantplaylistcreator) and got the following:
#EXTM3U
#EXT-X-STREAM-INF:AVERAGE-BANDWIDTH=248842,BANDWIDTH=394849,CODECS="mp4a.40.2, avc1.4d4028",RESOLUTION=480x360
4x3/lo/prog_index.m3u8
#EXT-X-STREAM-INF:AVERAGE-BANDWIDTH=384278,BANDWIDTH=926092,CODECS="mp4a.40.2, avc1.4d4028",RESOLUTION=480x360
4x3/mid/prog_index.m3u8
#EXT-X-STREAM-INF:AVERAGE-BANDWIDTH=787643,BANDWIDTH=985991,CODECS="mp4a.40.2, avc1.42801e",RESOLUTION=480x360
4x3/hi/prog_index.m3u8
#EXT-X-STREAM-INF:AVERAGE-BANDWIDTH=249335,BANDWIDTH=392133,CODECS="mp4a.40.2, avc1.4d4028",RESOLUTION=640x360
16x9/lo/prog_index.m3u8
#EXT-X-STREAM-INF:AVERAGE-BANDWIDTH=384399,BANDWIDTH=950686,CODECS="mp4a.40.2, avc1.4d4028",RESOLUTION=640x360
16x9/mid/prog_index.m3u8
#EXT-X-STREAM-INF:AVERAGE-BANDWIDTH=780648,BANDWIDTH=987197,CODECS="mp4a.40.2, avc1.42801e",RESOLUTION=640x360
16x9/hi/prog_index.m3u8
This does cause my live stream to switch between quality levels correctly, but it seems to pick randomly whether it uses a 4x3 or 16x9 resolution.
Is there a way to have it select the correct dimensions automatically, or do I need to have multiple playlist files and have each device request a specific one? For example, on an iPad do I need to detect that its screen has a 4x3 aspect ratio and request a 4x3_playlist.m3u8 that only has the 480x360 resolution option?
Update 2017:
Keeping the same aspect-ratio is only a recommendation in the latest HLS authoring guide:
1.33. All video variants SHOULD have identical aspect ratios.
Original answer:
Audio/Video Stream Considerations: Video aspect ratio must be exactly the same, but can be different dimensions.
Apple Technical Note TN2224 - Best Practices for Creating and Deploying HTTP Live Streaming Media for the iPhone and iPad
Select a playlist based on the User-Agent instead.
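A hypothetical sketch of that server-side selection, using the JDK's built-in HttpServer; the playlist names, the port, and the simple "iPad" User-Agent check are all illustrative assumptions, not a complete device-detection scheme:

import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.net.InetSocketAddress;

public class PlaylistRouter {
    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/master.m3u8", PlaylistRouter::route);
        server.start();
    }

    private static void route(HttpExchange exchange) throws IOException {
        String userAgent = exchange.getRequestHeaders().getFirst("User-Agent");
        boolean wants4x3 = userAgent != null && userAgent.contains("iPad");
        String target = wants4x3 ? "/4x3_playlist.m3u8" : "/16x9_playlist.m3u8";
        exchange.getResponseHeaders().set("Location", target);
        exchange.sendResponseHeaders(302, -1); // redirect; the player re-requests the chosen playlist
        exchange.close();
    }
}

The redirect keeps a single well-known URL in the app while each device ends up on a master playlist whose variants all share one aspect ratio.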
I have an application that plays back video frame by frame. This is all working. However, it needs audio playback too, and when audio and video run simultaneously the video lags behind the audio.
The logic I am using to display the video frames is as follows:
void processVideoThread() {
    // Read the encoded frame data from the socket.
    // Decode it inside the libvpx library; after decoding I get raw bitmap data.
    // After getting the raw bitmap data, use some mechanism to update the image;
    // here I tried runOnUiThread and a Handler.
}
Now what seems to be happening is that the UI thread gets a very late chance to update the image: libvpx takes approximately 30 ms to decode a frame, and going through runOnUiThread adds about 40 more ms before the image is updated, even though I am updating it on the UI thread.
Can anyone advise how I can reduce the delay in updating the image on the UI thread?
Looks interesting. If I were in your situation, I would examine the following methods.
1) Try using synchronization between the audio and video threads.
2) Try dropping video frames where the audio is lagging, and reduce the audio frequency where the video is lagging.
You can do the same in the following way.
int i = 0;
// Show every other frame so the video catches up with the audio.
if (i % 2 == 0) {
    ShowFrame();
}
i++;
What this will do is immediately halve the video frame rate, for example from 24 fps to 12 fps, so the audio and video will now match up. But, as I already mentioned, quality will be at stake. This method is called frequency scaling and is a widely used way to sync audio and video.
Refer to the following for a clearer understanding of the ways you can sync audio and video. It is written in relation to FFmpeg; I don't know how much of it you will be able to use directly, but it will definitely help you get the idea.
http://dranger.com/ffmpeg/tutorial05.html
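As that tutorial describes, the usual approach is to slave the video to the audio clock and drop frames that are already late, rather than blindly skipping every other frame. A rough sketch of that idea, where VideoClockSync, VideoFrame, decodeNextFrame(), getAudioClockMs(), and renderFrame() are all placeholders for your own classes, not a real API:

abstract class VideoClockSync {
    static final class VideoFrame {
        long presentationTimeMs;          // timestamp carried with the encoded frame
        android.graphics.Bitmap bitmap;   // decoded libvpx output
    }

    volatile boolean running = true;

    abstract VideoFrame decodeNextFrame();    // your socket read + libvpx decode (~30 ms)
    abstract long getAudioClockMs();          // e.g. derived from AudioTrack.getPlaybackHeadPosition()
    abstract void renderFrame(VideoFrame f);  // e.g. lockCanvas()/drawBitmap() on a SurfaceView

    void videoLoop() throws InterruptedException {
        while (running) {
            VideoFrame frame = decodeNextFrame();
            long lateByMs = getAudioClockMs() - frame.presentationTimeMs;
            if (lateByMs > 40) {
                continue;                     // frame is already late: drop it
            }
            if (lateByMs < 0) {
                Thread.sleep(-lateByMs);      // frame is early: wait until it is due
            }
            renderFrame(frame);               // render from this thread instead of hopping to the UI thread
        }
    }
}

Rendering to a SurfaceView (or TextureView) from the decode thread also removes the extra runOnUiThread hop that is costing you roughly 40 ms per frame.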
All the Best..