cv2.error: OpenCV(4.2.0) demosaicing.cpp:1721 error: (-215:Assertion failed) scn == 1 && (dcn == 3 || dcn == 4) in function 'demosaicing'

I'm getting the following OpenCV-Python error while running a face recognition module in Python 3.8.2:
cv2.error: OpenCV(4.2.0) /io/opencv/modules/imgproc/src/demosaicing.cpp:1721: error: (-215:Assertion failed) scn == 1 && (dcn == 3 || dcn == 4) in function 'demosaicing'
Could someone explain the cause of this error and the solution to it?
Here is the code:
known_faces = []
known_names = []
for name in os.listdir(KNOWN_FACES_DIR):
    for filename in os.listdir(f"{KNOWN_FACES_DIR}/{name}"):
        image = face_recognition.load_image_file(f"{KNOWN_FACES_DIR}/{name}/{filename}")
        encoding = face_recognition.face_encodings(image)[0]
        known_faces.append(encoding)
        known_names.append(name)

print("processing unknown faces!")
for filename in os.listdir(UNKNOWN_FACES_DIR):
    print(filename)
    image = face_recognition.load_image_file(f"{UNKNOWN_FACES_DIR}/{filename}")
    locations = face_recognition.face_locations(image, model=MODEL)
    encodings = face_recognition.face_encodings(image, locations)
    image = cv2.cvtColor(image, cv2.COLOR_BAYER_BG2BGR)

I did a bit of testing and searching, and I think the error is caused by the format of the pictures I uploaded.
I found this definition on Wikipedia:
A demosaicing (also de-mosaicing, demosaicking or debayering) algorithm is a digital image process used to reconstruct a full color image from the incomplete color samples output from an image sensor overlaid with a color filter array (CFA). It is also known as CFA interpolation or color reconstruction.
I tried changing the code, but to no avail. After seeing the definition, I suspect the input picture is in an incorrect format.

Related

Error in OpenFOAM (e.g. motorBike / snappyHexMesh)

Error
--> FOAM FATAL ERROR:
hanging pointer at index 5 (size 6), cannot dereference
From function const T& Foam::UPtrList<T>::operator[](Foam::label) const [with T = Foam::fvPatchField<double>; Foam::label = int]
in file /home/ubuntu/OpenFOAM/OpenFOAM-8/src/OpenFOAM/lnInclude/UPtrListI.H at line 100.
FOAM aborting
............................................................................................
Command executed: reconstructPar
Please note: this particular file path is invalid
"/home/ubuntu/OpenFOAM/OpenFOAM-8/src/OpenFOAM/lnInclude/UPtrListI.H"
the file UPtrListI.H is at
"/opt/openfoam8/src/OpenFOAM/lnInclude/UPtrListI.H"
Another error: no p and U results shown in paraFoam
(under Solid Color, only vtkCompositeIndex and vtkBlockColors appear; in the field options, U and p are marked; please see the attached image "para.png")
Please note: the simulation completed without any errors.
Why does this happen?

Pytorch, Unable to get repr for <class 'torch.Tensor'>

I'm implementing some RL in PyTorch and had to write my own mse_loss function (which I found on Stackoverflow ;) ).
The loss function is:
def mse_loss(input_, target_):
    return torch.sum(
        (input_ - target_) * (input_ - target_)) / input_.data.nelement()
Now, in my training loop, the first input is something like:
tensor([-1.7610e+10]), tensor([-6.5097e+10])
With this input I'll get the error:
Unable to get repr for <class 'torch.Tensor'>
Computing a = (input_ - target_) works fine, while b = a * a (and likewise b = torch.pow(a, 2)) fails with the error mentioned above.
Does anyone know a fix for this?
Thanks a lot!
Update:
I just tried using torch.nn.functional.mse_loss, which results in the same error.
I had the same error when using the code below:
    criterion = torch.nn.CrossEntropyLoss().cuda()
    output = output.cuda()
    target = target.cuda()
    loss = criterion(output, target)
I finally found my mistake: output was like tensor([[0.5746, 0.4254]]) and target was like tensor([2]); the class index 2 is out of range for an output with only two classes.
When I ran without the GPU, the error message was:
RuntimeError: Assertion `cur_target >= 0 && cur_target < n_classes' failed. at /opt/conda/conda-bld/pytorch-nightly_1547458468907/work/aten/src/THNN/generic/ClassNLLCriterion.c:93
Are you using a GPU?
I had a similar problem (but I was using gather operations), and when I moved my tensors to the CPU I got a proper error message. I fixed the error, switched back to the GPU, and it was fine.
Maybe PyTorch has trouble producing the correct error message when it originates inside the GPU.
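In other words, CrossEntropyLoss requires every class index in target to lie in [0, n_classes); running on CPU merely surfaces that assertion. A plain-Python sketch of the range check (check_targets is a hypothetical helper for illustration, not a PyTorch API):

```python
def check_targets(targets, n_classes):
    """Raise if any class index is invalid for an output with n_classes columns."""
    bad = [t for t in targets if not 0 <= t < n_classes]
    if bad:
        raise ValueError(f"class indices out of range for {n_classes} classes: {bad}")

check_targets([0, 1], n_classes=2)  # fine
# check_targets([2], n_classes=2)   # would raise: 2 is not a valid index into 2 classes
```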

OpenCV Assertion failed type mismatch

I am using node-opencv and I want to get the norm of two PNG images, but I get this instead:
OpenCV Error: Assertion failed (src1.size == src2.size && src1.type()
== src2.type()) in norm, file /build/opencv-RI6cfE/opencv-2.4.9.1+dfsg1/modules/core/src/stat.cpp,
line 1978
The sizes are equal, but the types are different: type() and channels() return 16 and 3 for the first Mat, and 24 and 4 for the second.
I tried convertGrayscale on both images and got "Error: Image is no 3-channel" (ok, the second has 4 channels, but the first?).
I also tried second.convertTo(second, 16), but only got
libpng warning: iCCP: known incorrect sRGB profile
and there was no effect; second.type() still returned 24.
Is there some way to convert Mat of any type to some kind of grayscale?
I plan to process a lot of images of different types, and I need to compare them with norm as grayscales.
here is my script:
var Promise = require("bluebird")
  , fs = Promise.promisifyAll(require('fs'))
  , cv = require('./opencv-build/node-opencv/lib/opencv');

var readImage = Promise.promisify(cv.readImage);
var ImageSimilarity = Promise.promisify(cv.ImageSimilarity);

var imgdir = __dirname + '/img/';
var img_o = imgdir + 'src/walken.png';
var img_d = imgdir + 'dst/walken.png';

readImage(img_o)
  .then(function (first) {
    readImage(img_d)
      .then(second => {
        second.convertTo(second, 16); // no effect, and: libpng warning: iCCP: known incorrect sRGB profile
        console.log("first",
          first.size(),
          first.type(),
          first.channels(),
          "second",
          second.size(),
          second.type(),
          second.channels());
        // second.convertGrayscale(); // doesn't work: Error: Image is no 3-channel
        console.log(first.norm(second, cv.Constants.NORM_L2));
      });
  });
and this is the output:
libpng warning: iCCP: known incorrect sRGB profile first [ 963, 1848 ]
16 3 second [ 963, 1848 ] 24 4 OpenCV Error: Assertion failed
(src1.size == src2.size && src1.type() == src2.type()) in norm, file
/build/opencv-RI6cfE/opencv-2.4.9.1+dfsg1/modules/core/src/stat.cpp,
line 1978 terminate called after throwing an instance of
'cv::Exception' what():
/build/opencv-RI6cfE/opencv-2.4.9.1+dfsg1/modules/core/src/stat.cpp:1978:
error: (-215) src1.size == src2.size && src1.type() == src2.type() in
function norm
Aborted (core dumped)
I think the libpng warning changes nothing.
P.S.
I tried converting both images to grayscale in GIMP; type and channels of both images become 0/1 and norm works as expected. I can't understand why OpenCV can't do it.
Finally, I switched from node-opencv (which works with OpenCV v2.3.1 but not 3.x) to opencv4nodejs (which works with OpenCV v3+).
And now norm just works. The libpng warning is still there, but the result is correct.
So it looks like OpenCV now handles the channel mismatch by itself.
Here is my code for opencv4nodejs:
const cv = require('opencv4nodejs');
var imgdir = __dirname+'/img/';
var img_o = imgdir + 'src/walken.png';
var img_d = imgdir + 'dst/walken.png';
var first = cv.imread(img_o);
var second = cv.imread(img_d);
console.log(first.norm(second, cv.NORM_L2));
As you can see, this code now works synchronously, so it looks cleaner.
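The underlying idea of the workaround (reduce both images to a single channel before comparing) can be sketched independently of the Node bindings; here in Python with NumPy, assuming RGB channel order and BT.601 luma weights:

```python
import numpy as np

def to_gray(img):
    """Collapse an HxWx3 or HxWx4 image to a single luma channel."""
    rgb = img[..., :3].astype(np.float64)         # drop the alpha channel if present
    return rgb @ np.array([0.299, 0.587, 0.114])  # BT.601 weights, assuming RGB order

a = np.random.randint(0, 256, (8, 8, 3), dtype=np.uint8)  # 3-channel image
b = np.random.randint(0, 256, (8, 8, 4), dtype=np.uint8)  # 4-channel image
dist = np.linalg.norm(to_gray(a) - to_gray(b))  # the L2 norm is now well-defined
```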

linearK error: seq.default() 'to' cannot be NA, NaN

I am trying to compute linearK estimates on a small linnet object from the CRC spatstat book (chapter 17), and when I use the linearK function, spatstat throws an error. I have documented the process in the comments in the R code below. The error is:
Error in seq.default(from = 0, to = right, length.out = npos + 1L) : 'to' cannot be NA, NaN or infinite
I do not understand how to resolve this. I am following this process:
# I have data of points for each day of the week
# d1 is district 1 of the city.
# I did the step below because otherwise I was getting a tbl class
d1_data=lapply(split(d1, d1$openDatefactor),as.data.frame)
# I previously created a linnet and divided it into districts of the city
d1_linnet = districts_linnet[["d1"]]
# I create a point pattern for each day
d1_ppp = lapply(d1_data, function(x) as.ppp(x, W=Window(d1_linnet)))
plot(d1_ppp[[1]], which.marks="type")
# I then convert the point pattern to a point pattern on a linear network
d1_lpp <- as.lpp(d1_ppp[[1]], L=d1_linnet, W=Window(d1_linnet))
d1_lpp
Point pattern on linear network
3 points
15 columns of marks: ‘status’, ‘number_of_’, ‘zip’, ‘ward’,
‘police_dis’, ‘community_’, ‘type’, ‘days’, ‘NAME’,
‘DISTRICT’, ‘openDatefactor’, ‘OpenDate’, ‘coseDatefactor’,
‘closeDate’ and ‘instance’
Linear network with 4286 vertices and 6183 lines
Enclosing window: polygonal boundary
enclosing rectangle: [441140.9, 448217.7] x [4640080, 4652557] units
# the errors start from plotting this lpp object
plot(d1_lpp)
"show.all" is not a graphical parameter
Show Traceback
Error in plot.window(...) : need finite 'xlim' values
coords(d1_lpp)
x y seg tp
441649.2 4649853 5426 0.5774863
445716.9 4648692 5250 0.5435492
444724.6 4646320 677 0.9189631
3 rows
And then consequently, I also get error on linearK(d1_lpp)
Error in seq.default(from = 0, to = right, length.out = npos + 1L) : 'to' cannot be NA, NaN or infinite
I feel the lpp object is the problem, but I find it hard to interpret the errors and work out how to resolve them. Could someone please guide me?
Thanks
I can confirm there is a bug in plot.lpp when trying to plot the marked point pattern on the linear network. That will hopefully be fixed soon. You can plot the unmarked point pattern using
plot(unmark(d1_lpp))
I cannot reproduce the problem with linearK. Which version of spatstat are you running? In the development version on my laptop (spatstat 1.51-0.073) everything works. There have been changes to this code recently, so it is likely that this will be solved by updating to the development version (see https://github.com/spatstat/spatstat).

Exception with Convexity Defects

I am trying to get convexity defects from the following code, but keep getting an unhandled exception.
What am I doing wrong?
vector<Vec4i> defects;
ContourPoly = vector<Point>(contour.size());
approxPolyDP(Mat(contour), ContourPoly, 20, false);
convexHull(Mat(ContourPoly), HullPoints, false, true);
// The following line won't work
convexityDefects(Mat(ContourPoly), HullPoints, defects);
HullPoints is of type vector<Point>.
The exception is as follows:
OpenCV Error: Assertion failed (ptnum > 3) in unknown function, file ..\..\..\src\opencv\modules\imgproc\src\contours.cpp, line 1969
But with vector<Point> defects or vector<Vec4i> defects,
I get the following exception:
OpenCV Error: Assertion failed (hull.checkVector(1, CV_32S)) in unknown function, file ..\..\..\src\opencv\modules\imgproc\src\contours.cpp, line 1971
defects should be vector<Vec4i>.
From the documentation:
each convexity defect is represented as 4-element integer vector (a.k.a. cv::Vec4i): (start_index, end_index, farthest_pt_index, fixpt_depth), where indices are 0-based indices in the original contour of the convexity defect beginning, end and the farthest point, and fixpt_depth is fixed-point approximation (with 8 fractional bits) of the distance between the farthest contour point and the hull. That is, to get the floating-point value of the depth will be fixpt_depth/256.0
First of all
vector<vector<Vec4i> > defects;
should be:
vector<vector<Vec4i> > defects( contour.size() );
Also, before calling convexityDefects, check that HullPoints contains more than three points.
