I set up the Licode platform by following the Licode documentation.
How can I adjust the video resolution on this platform (in the basic example)? I tried to adjust it in the ./licode/extras/basic_example/public/script.js file, but I was unable to. Any help would be appreciated.
I'm assuming that by resolution you mean the input video constraints.
To do this, you specify them when instantiating a stream object. The example below defines a minimum of 640 by 480 pixels and a maximum of 1280 by 720:
```
var stream = Erizo.Stream({audio: true, video: true, data: true, videoSize: [640, 480, 1280, 720]});
```
You can find further documentation on it here: http://lynckia.com/licode/client-api.html#stream
An extract:
You can also specify some constraints about the video size when creating a stream. In order to do this you need to include a videoSize parameter that is an array with the following format: [minWidth, minHeight, maxWidth, maxHeight]
Related
I am converting video files using AWS Elastic Transcoder. An AWS Lambda function gets each video file from an S3 bucket and converts it according to a PresetId.
But I need to compare the video file's resolution with the preset's. If the file's resolution is higher than the preset's video resolution, the file should be converted; otherwise there is no need to convert it.
Do you have access to ffmpeg/ffprobe/ffplay from AWS? Is it possible to call them and capture their console output? I'm not sure what's allowed on AWS, but on a desktop you could call ffprobe, which can return plain text or even JSON.
Many ways are suggested here: Getting video dimension / resolution / width x height from ffmpeg
One of the suggested ways:
```
ffprobe -v error -show_entries stream=width,height -of csv=p=0:s=x input.m4v
1280x720
```
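If your Lambda environment can invoke a bundled ffprobe binary, a minimal sketch of the comparison might look like the following. The preset resolution values here are placeholders; look the real ones up from your PresetId via the Elastic Transcoder API.

```
import subprocess

def video_resolution(path, ffprobe="ffprobe"):
    """Return (width, height) of the first video stream, via ffprobe."""
    out = subprocess.check_output([
        ffprobe, "-v", "error",
        "-show_entries", "stream=width,height",
        "-of", "csv=p=0:s=x", path,
    ]).decode()
    width, height = out.strip().splitlines()[0].split("x")
    return int(width), int(height)

# Hypothetical preset resolution; read the real one from your PresetId.
PRESET_WIDTH, PRESET_HEIGHT = 1280, 720

w, h = video_resolution("input.m4v")
if w > PRESET_WIDTH or h > PRESET_HEIGHT:
    print("source is %dx%d, larger than the preset: transcode it" % (w, h))
else:
    print("no conversion needed")
```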
I am trying to analyze a WAV file in Python and get its RMS value. I am using audioop.rms to get the value, but when I went to do this I did not know what fragment and width stood for. I am new to audioop and hope somebody can explain this. I am also wondering whether there is a better way to do this in Python.
Update: I have done some research and found out that fragment stands for the WAV data. I still need to figure out what width means.
A fragment is just a chunk of data. Width is the size in bytes of each sample: 8-bit data has width 1, 16-bit data has width 2, and so on. For example:
```
import alsaaudio, audioop

# Open the default ALSA capture device in non-blocking mode
inp = alsaaudio.PCM(alsaaudio.PCM_CAPTURE, alsaaudio.PCM_NONBLOCK)
inp.setchannels(1)
inp.setrate(8000)
inp.setformat(alsaaudio.PCM_FORMAT_S16_LE)  # signed 16-bit little-endian samples
inp.setperiodsize(300)

length, data = inp.read()
avg_i = audioop.avg(data, 2)  # S16_LE samples are 2 bytes wide, so width is 2
```
In the example I am setting the ALSA capture card to S16_LE, signed 16-bit little-endian, so I have to set width to 2. The fragment is just the data captured by ALSA; in your case, the WAV file's contents are your data.
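For a WAV file specifically, a minimal sketch might look like this (the filename is a placeholder); the standard wave module can tell you the width directly:

```
import wave, audioop

wav = wave.open("input.wav", "rb")          # placeholder filename
width = wav.getsampwidth()                  # bytes per sample, e.g. 2 for 16-bit
frames = wav.readframes(wav.getnframes())   # the fragment: the raw sample bytes
wav.close()

print(audioop.rms(frames, width))           # RMS over the whole file
```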
I have created an 8-bit YUV player for the packed YUY2 format using the SDL library. Part of the code:
```
handle->texture = SDL_CreateTexture(handle->renderer, SDL_PIXELFORMAT_YUY2,
                                    SDL_TEXTUREACCESS_STREAMING, width, height);
SDL_UpdateTexture(handle->texture, NULL, pDisplay->Ydata, (handle->width * 2));
```
When creating the texture, the pixel format is given as SDL_PIXELFORMAT_YUY2, and the texture is updated with a pitch of twice the width. That plays fine.
But when it comes to 10-bit YUV, it plays a distorted, greenish video.
I have tried changing the pitch to (handle->width * 2 * 2), but with no success.
Someone also suggested converting the 10-bit values to 8-bit, but I don't want to do that.
Please help me play the 10-bit packed YUY2 format.
Does SDL support rendering pixels with more than 8 bits of depth?
I'm using libvlc to play video files while downloading them. My problem is that libvlc_media_player_get_length returns nothing.
I also thought about calculating an approximation from the bitrate and the file size, but I can't find how to get the bitrate with libvlc.
Is there a function for that?
Thanks
I finally found this way:
```
libvlc_media_stats_t stats;
if (libvlc_media_get_stats(vlcmedia, &stats))
{
    /* libvlc_media_player_get_time() returns the elapsed time in milliseconds */
    libvlc_time_t p = libvlc_media_player_get_time(vlcmediaplayer);
    if (p)
        bitrate = stats.i_read_bytes / p;   /* bytes per millisecond */
}
```
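Given that figure, the approximation from the question is simple arithmetic. A sketch, with hypothetical byte counts standing in for the real measurements:

```
read_bytes = 9600000    # hypothetical: stats.i_read_bytes so far
elapsed_ms = 60000      # hypothetical: libvlc_media_player_get_time() so far
file_size = 48000000    # hypothetical: total size of the file being downloaded

bytes_per_ms = read_bytes / elapsed_ms          # the "bitrate" from the snippet above
estimated_length_ms = file_size / bytes_per_ms  # 300000 ms, i.e. about 5 minutes
```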
libvlc_media_tracks_get returns an array of structures containing track information.
Each returned libvlc_media_track_t carries an i_bitrate field, so for audio and video tracks you can read the bitrate from there.
This API was added in LibVLC 2.1.0; see http://git.videolan.org/?p=vlc.git;a=commit;h=cd5345a00009f2fc571c23509a025331ad24fc87.
VLC 2.1.0 was released in September 2013.
Hey, I'm following Derek Molloy's tutorial:
http://derekmolloy.ie/beaglebone/beaglebone-video-capture-and-image-processing-on-embedded-linux-using-opencv/#comment-30209
I'm using a Logitech C310 webcam, which is supported by the Linux UVC drivers.
```
root@beaglebone:/boneCV# v4l2-ctl --all
Driver Info (not using libv4l2):
    Driver name   : uvcvideo
    Card type     : UVC Camera (046d:081b)
    Bus info      : usb-musb-hdrc.1.auto-1
    Driver version: 3.8.13
    Capabilities  : 0x84000001
        Video Capture
        Streaming
Format Video Capture:
    Width/Height  : 640/480
    Pixel Format  : 'YUYV'
    Field         : None
    Bytes per Line: 1280
    Size Image    : 614400
    Colorspace    : SRGB
Crop Capability Video Capture:
    Bounds      : Left 0, Top 0, Width 640, Height 480
    Default     : Left 0, Top 0, Width 640, Height 480
    Pixel Aspect: 1/1
Video input : 0 (Camera 1: ok)
Streaming Parameters Video Capture:
    Capabilities     : timeperframe
    Frames per second: 30.000 (30/1)
    Read buffers     : 0
Priority: 2
```
So we can see that the BeagleBone reads it with no problem.
When I try to capture the video, I simply get this error:
```
root@beaglebone:/boneCV# ./capture -f -c 600 -o > output.raw
Force Format 1
select timeout
```
Looking at other threads, people don't seem to know how to answer this question. Can anyone with experience of this project help me out?
If you compare the image size of YUYV with that of MJPEG, you will notice that the former is much larger than the latter: MJPEG outputs a compressed video stream, while YUYV is raw. The BBB has limited bandwidth on its USB port, and that is why you cannot operate your camera in YUYV format. Also, different OpenCV versions tend to override the resolution you set with the v4l2-ctl command, so you have to set the resolution in the boneCV code itself. I'm not sure how that's done in C++, but for Python, check Changing camera resolution in opencv code. According to Matthew (Bandwidth limitations), he tested the port and found the usable bandwidth to be 13.2 MB/s.
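A quick back-of-the-envelope check makes the YUYV numbers concrete; the 13.2 MB/s figure is the one quoted above, and the frame size matches the v4l2-ctl output:

```
width, height = 640, 480
bytes_per_pixel = 2        # YUYV packs 2 bytes per pixel

frame_bytes = width * height * bytes_per_pixel  # 614400, the "Size Image" above
rate = frame_bytes * 30                         # bytes per second at 30 fps

print(rate / 1e6)   # ~18.4 MB/s, well above the ~13.2 MB/s the port can sustain
```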
Well, I can say the issue is resolved. After rebooting and trying the camera again several hours later, it magically seems to work.
The only thing I changed is the capture call, which is simpler now:

```
./capture -o > output.raw
```
I haven't converted the raw file to MPEG-4 yet, since I'm installing ffmpeg as I type this; however, I can confirm that grabbing still images works. The file size of output.raw confirms that it is indeed capturing video as well. If anyone finds this and is stuck, I will gladly help as much as I can.
Strangely, it only seems to capture video after running the picture-grabber program first, so the grabber must be initializing something that the capture program isn't.
UPDATE: OK, it turns out that the YUYV video mode is not working but MJPEG is; putting it into grabber mode initialized MJPEG mode, and that's why it worked. I'm not sure yet why YUYV doesn't work.