Using MATLAB VideoReader gives a null object - Linux

My OS is Red Hat. When I installed MATLAB R2014b and called VideoReader('1.avi'), I got this output:
>> VideoReader('1.avi')
ans =
VideoReader with properties:
General Properties:
Name: '1.avi'
Path: '/home/lyw/Videos'
Duration: 0
CurrentTime: 0
Tag: ''
UserData: []
Video Properties:
Width: 0
Height: 0
FrameRate: 0
BitsPerPixel: 0
VideoFormat: ''
However, when I use aviinfo('1.avi'), I can get video information like this:
aviinfo('1.avi')
> In aviinfo at 66
ans =
Filename: '/home/lyw/Videos/1.avi'
FileSize: 3554002
FileModDate: '26-Dec-2014 19:15:20'
NumFrames: 749
FramesPerSecond: 25
Width: 688
Height: 384
ImageType: 'truecolor'
VideoCompression: 'XVID'
Quality: 0
NumColormapEntries: 0
I want to know how I can read this video.

The AVI file appears to contain Xvid-compressed data. On Linux, VideoReader uses GStreamer to read videos. Are you sure you have suitable codecs installed on your system?
One quick way to verify this is to try the following in a Linux terminal (after copying the file to /tmp):
gst-launch-0.10 playbin2 uri=file:///tmp/1.avi
If this command succeeds, it indicates that GStreamer is able to read the file, in which case you should probably contact tech support.
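The check above is also easy to script. Here is a minimal sketch (my addition, not from the original answer) in Python 3 that simply wraps the same gst-launch-0.10 command and looks at its exit code:

import subprocess

# Try to play the file with GStreamer, exactly as in the terminal command above.
result = subprocess.run(
    ["gst-launch-0.10", "playbin2", "uri=file:///tmp/1.avi"],
    stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)

if result.returncode == 0:
    print("GStreamer decoded the file, so the codec side looks fine.")
else:
    print("GStreamer could not play the file; an Xvid decoder may be missing.")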

Related

plupload, preserve_headers = false, and autorotation issue

I have a jpeg image where the EXIF Orientation flag = 6, or "Rotate 90 CW". Here's the pertinent data from exiftool:
---- ExifTool ----
ExifTool Version Number : 12.44
---- File ----
File Name : orig.jpg
Image Width : 4032
Image Height : 3024
---- EXIF ----
Orientation : Rotate 90 CW
Exif Image Width : 4032
Exif Image Height : 3024
---- Composite ----
Image Size : 4032x3024
Here's how IrfanView presents the image, with auto-rotate turned off:
Using the plupload "Getting Started" script from here, with preserve_headers = false, I get an image without EXIF headers - as expected - but rotated 180 degrees, which is unexpected. Again, the view with IrfanView:
Here's the "resize" snippet from the code:
resize: {
    width: 5000,
    height: 5000,
    preserve_headers: false
}
Is there something I'm doing wrong? I would have expected a 90 CW rotation on upload with the EXIF stripped.
Dan
Edit: I'm using plupload V2.3.9
BUMP
I'm getting the exact same result with plupload using these exif samples on GitHub. I chose landscape_6 because its Orientation is the same as in my example ("Rotate 90 CW", or Orientation tag value 6). Here are the before and after upload views using IrfanView with no autorotate, preserve_headers = false:
Aren't these the canonical examples for demonstrating EXIF properties? Unless I'm missing some fundamental point, plupload is busted. I'd much rather it be the former, and that someone can tell me the error of my ways.
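For comparison, here is a small sketch (my addition, not part of the question) that uses Pillow's ImageOps.exif_transpose to produce what the correctly auto-rotated, EXIF-stripped image should look like for Orientation 6; the file name orig.jpg is taken from the exiftool dump above:

from PIL import Image, ImageOps

img = Image.open("orig.jpg")
print(img.size)                     # (4032, 3024) as stored on disk

# Apply the EXIF Orientation tag (6 = "Rotate 90 CW") and drop it from the copy,
# which is the behaviour one would expect from preserve_headers = false.
rotated = ImageOps.exif_transpose(img)
print(rotated.size)                 # (3024, 4032) once the rotation is applied

rotated.save("expected.jpg")        # compare this against what plupload uploads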

Is there a way to check the volume level of all processes with pipewire/pulseaudio?

I'm trying to find a way to check if I have any desktop audio AND which processes are producing sound.
After some searching I found a way to list all the sink inputs in pipewire/pulseaudio using pactl list sink-inputs, but I have no idea whether a given input is actually playing or muted.
Example output:
Sink Input #512
Driver: protocol-native.c
Owner Module: 9
Client: 795
Sink: 1
Sample Specification: float32le 2ch 48000Hz
Channel Map: front-left,front-right
Format: pcm, format.sample_format = "\"float32le\"" format.rate = "48000" format.channels = "2" format.channel_map = "\"front-left,front-right\""
Corked: yes
Mute: no
Volume: front-left: 43565 / 66% / -10.64 dB, front-right: 43565 / 66% / -10.64 dB
balance 0.00
Buffer Latency: 165979 usec
Sink Latency: 75770 usec
Resample method: speex-float-1
Properties:
media.name = "Polish cow (English Lyrics Full Version) - YouTube"
application.name = "Firefox"
native-protocol.peer = "UNIX socket client"
native-protocol.version = "35"
application.process.id = "612271"
application.process.user = "user"
application.process.host = "host"
application.process.binary = "firefox"
application.language = "en_US.UTF-8"
window.x11.display = ":0"
application.process.machine_id = "93e71eeba04e43789f0972b7ea0e4b39"
application.process.session_id = "2"
application.icon_name = "firefox"
module-stream-restore.id = "sink-input-by-application-name:Firefox"
The obvious thing would be to look at the Mute and Volume lines, but that is not reliable at all: currently the YouTube video is paused, yet Mute is shown as no and Volume is no different from when the video is actually playing.
I need the solution to be scriptable, since I'll be muting certain things when another process is making sound and unmuting them when there is no sound, using a bash script. If this is not possible with pipewire/pulseaudio but is possible with another sound server, please do tell me.
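Not part of the original question, but here is a minimal Python sketch of the kind of parsing a script could do over pactl list sink-inputs. It only reads fields shown in the output above; in that output the paused YouTube stream has Corked: yes, so Corked may be a more useful "is this stream actually playing" signal than Mute or Volume (treat that as an assumption to verify on your system):

import subprocess

def list_sink_inputs():
    # Run pactl and pick out the id, application name, Corked and Mute fields.
    out = subprocess.run(["pactl", "list", "sink-inputs"],
                         capture_output=True, text=True, check=True).stdout
    streams, current = [], {}
    for line in out.splitlines():
        line = line.strip()
        if line.startswith("Sink Input #"):
            if current:
                streams.append(current)
            current = {"id": line.split("#", 1)[1]}
        elif line.startswith("Corked:"):
            current["corked"] = line.split(":", 1)[1].strip()
        elif line.startswith("Mute:"):
            current["mute"] = line.split(":", 1)[1].strip()
        elif line.startswith("application.name ="):
            current["app"] = line.split("=", 1)[1].strip().strip('"')
    if current:
        streams.append(current)
    return streams

for s in list_sink_inputs():
    print(s.get("id"), s.get("app"),
          "corked=" + s.get("corked", "?"), "mute=" + s.get("mute", "?"))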

Convert .mp4 to .mpeg4 using Converter

I have an MP4 file, which I would like to convert into an MPEG4 file. To do this, I have found the PythonVideoConverter package. On the PyPI page, the following code is given:
from converter import Converter

conv = Converter()
info = conv.probe('test/test1.avi')

PATH = 'C:/Users/.../'
convert = conv.convert(PATH + 'Demo.mp4', PATH + 'Demo.mpeg4', {
    'format': 'mpeg4',
    'audio': {
        'codec': 'aac',
        'samplerate': 11025,
        'channels': 2
    },
    'video': {
        'codec': 'hevc',
        'width': 720,
        'height': 400,
        'fps': 25
    }})
When I run this code, a convert object is created. However, there is no .mpeg4 video in the PATH directory.
Therefore, I have two questions:
Is the code above correct for converting a .mp4 file into a .mpeg4 file?
What do I need to run to save the converted video as a .mpeg4 file?
Based on Selcuk's comment, I ran the following code:
for timecode in convert:
    pass
This gives the error:
Traceback (most recent call last):
File "<ipython-input-60-14c9225c3ac2>", line 1, in <module>
for timecode in convert:
File "C:\Users\20200016\Anaconda3\lib\site-packages\converter\__init__.py", line 229, in convert
optlist = self.parse_options(options, twopass)
File "C:\Users\20200016\Anaconda3\lib\site-packages\converter\__init__.py", line 60, in parse_options
raise ConverterError(f'Requested unknown format: {str(f)}')
ConverterError: Requested unknown format: mpeg4
So, my suggested format seems incorrect. What can I do to convert a video into .mpeg4?
I don't think PythonVideoConverter is meant to be used in Windows.
I was getting an exception AttributeError: module 'signal' has no attribute 'SIGVTALRM', because SIGVTALRM is not a valid signal in Windows.
The default paths of the FFmpeg and FFprobe command line tools also don't make sense for Windows.
We may still use the package in Windows, but it's recommended to set ffmpeg_path and ffprobe_path.
Example:
conv = Converter(ffmpeg_path=r'c:\FFmpeg\bin\ffmpeg.exe', ffprobe_path=r'c:\FFmpeg\bin\ffprobe.exe')
We also have to disable the timeout feature by passing the timeout=None argument.
mpeg4 is not a valid FFmpeg format, but we can still use it as a file extension
(format in FFmpeg terminology usually refers to the container format).
When a non-standard file extension is used, we have to set the format entry explicitly.
Setting 'format': 'mp4' creates an MP4 file container (which may be written with the non-standard .mpeg4 file extension).
Complete code sample:
from converter import Converter

conv = Converter(ffmpeg_path=r'c:\FFmpeg\bin\ffmpeg.exe', ffprobe_path=r'c:\FFmpeg\bin\ffprobe.exe')

#info = conv.probe('test/test1.avi')

PATH = 'C:/Users/Rotem/'
convert = conv.convert(PATH + 'Demo.mp4', PATH + 'Demo.mpeg4', {
    'format': 'mp4',  # not 'format': 'mpeg4'
    'audio': {
        'codec': 'aac',
        'samplerate': 11025,
        'channels': 2
    },
    'video': {
        'codec': 'hevc',
        'width': 720,
        'height': 400,
        'fps': 25
    }},
    timeout=None)

# https://pypi.org/project/PythonVideoConverter/
for timecode in convert:
    print(f'\rConverting ({timecode:.2f}) ...')
We can inspect the media information of Demo.mpeg4 using the MediaInfo tool:
General
Complete name : C:\Users\Rotem\Demo.mpeg4
Format : MPEG-4
Format profile : Base Media
Codec ID : isom (isom/iso2/mp41)
File size : 207 KiB
Duration : 10 s 148 ms
Overall bit rate mode : Variable
Overall bit rate : 167 kb/s
Writing application : Lavf58.45.100
FileExtension_Invalid : braw mov mp4 m4v m4a m4b m4p m4r 3ga 3gpa 3gpp 3gp 3gpp2 3g2 k3g jpm jpx mqv ismv isma ismt f4a f4b f4v
Video
ID : 1
Format : HEVC
Format/Info : High Efficiency Video Coding
Format profile : Main@L3@Main
Codec ID : hev1
Codec ID/Info : High Efficiency Video Coding
Duration : 10 s 0 ms
Bit rate : 82.5 kb/s
Width : 720 pixels
Height : 400 pixels
Display aspect ratio : 16:9
Frame rate mode : Constant
Frame rate : 25.000 FPS
Color space : YUV
Chroma subsampling : 4:2:0
Bit depth : 8 bits
Scan type : Progressive
Bits/(Pixel*Frame) : 0.011
Stream size : 101 KiB (49%)
Writing library : x265 3.4+28-419182243:[Windows][GCC 9.3.0][64 bit] 8bit+10bit+12bit
Encoding settings : ...
Color range : Limited
Codec configuration box : hvcC
Audio
ID : 2
Format : AAC LC
Format/Info : Advanced Audio Codec Low Complexity
Codec ID : mp4a-40-2
Duration : 10 s 148 ms
Duration_LastFrame : -70 ms
Bit rate mode : Variable
Bit rate : 79.1 kb/s
Maximum bit rate : 128 kb/s
Channel(s) : 2 channels
Channel layout : L R
Sampling rate : 11.025 kHz
Frame rate : 10.767 FPS (1024 SPF)
Compression mode : Lossy
Stream size : 98.0 KiB (47%)
Title : IsoMedia File Produced by Google, 5-11-2011
Language : English
Default : Yes
Alternate group : 1
In the MediaInfo output, the MP4 file container is reported as "MPEG-4" format...
Note:
The HEVC video format refers to the H.265 video codec; in most cases the codec is considered more relevant than the container.
'Requested unknown format: mpeg4'
.mpeg4 is not a valid container. mpeg4 is a codec; .avi, .mp4, .mov, .mkv, ... are containers.
Basically: codec.CONTAINER, e.g. your_mpeg4_video.mkv.
A video codec (like mpeg4) handles only the video, but you need more than the visual stream: audio, possibly several audio tracks (eng, de, nl, 2.0, 5.1, 7.1, ...), subtitles, etc., and all of that lives inside the container.
Install FFmpeg: https://ffmpeg.org/
Try this basic script:
import subprocess
input_file = 'Demo.mp4'
output_file = 'Demo.mkv' # or .mp4, .mov, ...
ffmpeg_cli = "ffmpeg -i '{}' -vcodec libx265 '{}'".format(input_file, output_file)
subprocess.call(ffmpeg_cli, shell=True)
I don't know exactly what you are doing (what you want, what your expectations are), but if you are looking for a way to decrease the size of a video, look here: https://github.com/MarcelSuleiman/convert_h264_to_h265
Simple.

Opencv_createsamples fails with segmentation fault

I am currently trying to build a Haar classifier. I have made an annotation file and have done everything as described in the official OpenCV tutorial: https://docs.opencv.org/3.3.0/dc/d88/tutorial_traincascade.html
However, when I try to create the samples with opencv_createsamples, I get an error. My command:
opencv_createsamples -vec /some_dirs/samples/samples.vec -info /some_dirs/annotations/annotations.dat -w 8 -h 8 -num 100
The error:
Info file name: /home/nikifaets/code/pointsProcessing/annotations/annotations.dat
Img file name: (NULL)
Vec file name: /home/nikifaets/code/pointsProcessing/samples/samples.vec
BG file name: (NULL)
Num: 100
BG color: 0
BG threshold: 80
Invert: FALSE
Max intensity deviation: 40
Max x angle: 1.1
Max y angle: 1.1
Max z angle: 0.5
Show samples: FALSE
Width: 8
Height: 8
Max Scale: -1
RNG Seed: 12345
Create training samples from images collection...
OpenCV Error: Assertion failed (ssize.width > 0 && ssize.height > 0) in resize, file /build/opencv/src/opencv-3.4.0/modules/imgproc/src/resize.cpp, line 4044
terminate called after throwing an instance of 'cv::Exception'
what(): /build/opencv/src/opencv-3.4.0/modules/imgproc/src/resize.cpp:4044: error: (-215) ssize.width > 0 && ssize.height > 0 in function resize
Aborted (core dumped)
However, if I try to do only two samples (no idea why exactly 2...), it runs and creates the .vec file, although my dataset includes about 300-400 pictures.
Pastebin of annotations.dat
Thank you in advance for the support!
Solved! Thanks to Micka for suggesting the solution and being right. There was an error in the annotations file: one of the object descriptions was 0 0 0 0, which is invalid. Always check your files carefully!
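For anyone hitting the same thing, here is a small sketch (my addition, not from the original post) that scans an opencv_createsamples-style annotation file for degenerate boxes like the 0 0 0 0 entry mentioned above:

import sys

def check_annotations(path):
    # Assumes the usual opencv_createsamples -info format, one entry per line:
    #   <image path> <number of boxes> x1 y1 w1 h1 x2 y2 w2 h2 ...
    # (also assumes image paths without spaces)
    with open(path) as f:
        for lineno, line in enumerate(f, 1):
            parts = line.split()
            if len(parts) < 2:
                continue
            count = int(parts[1])
            boxes = parts[2:]
            if len(boxes) != 4 * count:
                print(f"line {lineno}: expected {4 * count} box values, got {len(boxes)}")
                continue
            for i in range(count):
                x, y, w, h = map(int, boxes[4 * i:4 * i + 4])
                if w <= 0 or h <= 0:
                    print(f"line {lineno}: box {i + 1} is degenerate: {x} {y} {w} {h}")

check_annotations(sys.argv[1] if len(sys.argv) > 1 else "annotations.dat")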

OpenCV Error: Assertion failed (_img.rows * _img.cols == vecSize)

I keep getting this error
OpenCV Error: Assertion failed (_img.rows * _img.cols == vecSize) in get, file /build/opencv-SviWsf/opencv-2.4.9.1+dfsg/apps/traincascade/imagestorage.cpp, line 157
terminate called after throwing an instance of 'cv::Exception'
what(): /build/opencv-SviWsf/opencv-2.4.9.1+dfsg/apps/traincascade/imagestorage.cpp:157: error: (-215) _img.rows * _img.cols == vecSize in function get
Aborted (core dumped)
when running opencv_traincascade. I run with these arguments: opencv_traincascade -data data -vec positives.vec -bg bg.txt -numPos 1600 -numNeg 800 -numStages 10 -w 20 -h 20.
My project build is as follows:
workspace
|__ bg.txt
|__ data/          # where I plan to put cascade
|__ info/
|   |__ # all samples
|   |__ info.lst
|__ jersey5050.jpg
|__ neg/
|   |__ # neg images
|__ opencv/
|__ positives.vec
Before that, I ran opencv_createsamples -img jersey5050.jpg -bg bg.txt -info info/info.lst -maxxangle 0.5 -maxyangle 0.5 -maxzangle 0.5 -num 1800
Not quite sure why I'm getting this error. The images are all converted to greyscale as well. The negatives are sized at 100x100 and jersey5050.jpg is sized at 50x50. I saw someone with the same error on the OpenCV forums, and someone suggested deleting the backup .xml files that are created by OpenCV in case the training is "interrupted". I deleted those and nothing changed. Please help! I'm using Python 3 on Mac. I'm also running these commands on an Ubuntu server from DigitalOcean with 2 GB of RAM, but I don't think that's part of the problem.
EDIT
Forgot to mention: after that opencv_createsamples command, I then ran opencv_createsamples -info info/info.lst -num 1800 -w 20 -h20 -vec positives.vec
I solved it. Even though I specified the width and height as 20x20 in the command, the samples actually came out as 20x24 (presumably because -h20, written without a space, was not recognized and the default height was used). So the opencv_traincascade command was throwing an error. Once I changed the width and height arguments passed to opencv_traincascade to match, it worked.
This error is observed when the parameters passed to opencv_traincascade do not match the vec file that was generated, as the terminal rightly states in the line
Assertion failed (_img.rows * _img.cols == vecSize)
opencv_createsamples displays the parameters passed to it. Please verify that the parameters used for creating the samples are the same ones you pass to opencv_traincascade (see the vec-header check sketch after the log below). I have attached the terminal log for reference.
mayank@mayank-Aspire-A515-51G:~/programs/opencv/CSS/homework/HAAR_classifier/dataset$ opencv_createsamples -info pos.txt -num 235 -w 40 -h 40 -vec positives_test.vec
Info file name: pos.txt
Img file name: (NULL)
Vec file name: positives_test.vec
BG file name: (NULL)
Num: 235
BG color: 0
BG threshold: 80
Invert: FALSE
Max intensity deviation: 40
Max x angle: 1.1
Max y angle: 1.1
Max z angle: 0.5
Show samples: FALSE
Width: 40 <--- confirm
Height: 40 <--- confirm
Max Scale: -1
RNG Seed: 12345
Create training samples from images collection...
Done. Created 235 samples
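Not from the original answer, but here is a small sketch of how one could double-check this from a script. It assumes the .vec header layout commonly used by opencv_createsamples (a 12-byte header: int32 sample count, int32 vecSize = w*h, then two int16 fields), so treat that layout as an assumption rather than a documented API:

import struct

def check_vec(path, w, h):
    # Read the assumed 12-byte header written by opencv_createsamples.
    with open(path, "rb") as f:
        count, vec_size, _min, _max = struct.unpack("<iihh", f.read(12))
    print(f"{path}: {count} samples, vecSize = {vec_size}")
    if w * h == vec_size:
        print(f"-w {w} -h {h} matches the vec file (w*h = {w * h})")
    else:
        print(f"mismatch: -w {w} -h {h} gives {w * h}, but the vec file was built with vecSize = {vec_size}")

check_vec("positives.vec", 20, 20)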
