eglCreateImageKHR returning EGL_BAD_ATTRIBUTE error - linux

I have implemented hardware decoding on Linux using VAAPI via FFmpeg. Since I have an OpenGL application, I am converting the decoded VAAPI surfaces to OpenGL textures using vaCopySurfaceGLX. This works fine, except that it involves an extra copy (on the GPU). I was told that I could use the VAAPI surfaces directly as OpenGL textures through EGL. I have looked at some examples (mainly the Kodi source code), but I'm not able to create the EGLImageKHR: eglCreateImageKHR returns 0, and when I check for errors I get an EGL_BAD_ATTRIBUTE error, but I don't understand why.
Below is how I'm converting the VAAPI surface.
During initialization, I set up EGL this way:
// currentDisplay comes from call to glXGetCurrentDisplay() and is also used when getting the VADisplay like this: vaGetDisplay(currentDisplay)
EGLint major, minor;
_eglDisplay = eglGetDisplay(currentDisplay);
eglInitialize(_eglDisplay, &major, &minor);
eglBindAPI(EGL_OPENGL_API);
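(As a sanity check, not part of my original code, one can first make sure the display actually advertises the dma-buf import extension and resolve the KHR entry points; just a sketch, using the standard eglext.h typedefs:)
const char *exts = eglQueryString(_eglDisplay, EGL_EXTENSIONS);
if (!exts || !strstr(exts, "EGL_EXT_image_dma_buf_import")) {
    // dma-buf import is not available on this EGL display
}
PFNEGLCREATEIMAGEKHRPROC eglCreateImageKHR =
    (PFNEGLCREATEIMAGEKHRPROC)eglGetProcAddress("eglCreateImageKHR");
PFNEGLDESTROYIMAGEKHRPROC eglDestroyImageKHR =
    (PFNEGLDESTROYIMAGEKHRPROC)eglGetProcAddress("eglDestroyImageKHR");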
Then later, to create my EGL image, this is what I do:
// _vaapiContext.vaDisplay comes from vaGetDisplay(currentDisplay)
// surface is the VASurfaceID of the surface I want to use in OpenGL
vaDeriveImage(_vaapiContext.vaDisplay, surface, &_vaapiContext.vaImage);
VABufferInfo buf_info;
memset(&buf_info, 0, sizeof(buf_info));
buf_info.mem_type = VA_SURFACE_ATTRIB_MEM_TYPE_DRM_PRIME;
vaAcquireBufferHandle(_vaapiContext.vaDisplay, _vaapiContext.vaImage.buf, &buf_info);
EGLint attribs[] = {
    EGL_WIDTH, _vaapiContext.vaImage.width,
    EGL_HEIGHT, _vaapiContext.vaImage.height,
    EGL_LINUX_DRM_FOURCC_EXT, fourcc_code('R', '8', ' ', ' '),
    EGL_DMA_BUF_PLANE0_FD_EXT, buf_info.handle,
    EGL_DMA_BUF_PLANE0_OFFSET_EXT, _vaapiContext.vaImage.offsets[0],
    EGL_DMA_BUF_PLANE0_PITCH_EXT, _vaapiContext.vaImage.pitches[0],
    EGL_NONE
};
EGLImageKHR eglImage = eglCreateImageKHR(_eglDisplay, EGL_NO_CONTEXT, EGL_LINUX_DMA_BUF_EXT, (EGLClientBuffer)NULL, attribs);
Looking at what could cause this error in the EGL_EXT_image_dma_buf_import specification (https://www.khronos.org/registry/egl/extensions/EXT/EGL_EXT_image_dma_buf_import.txt), I also tried adding the following attributes, which should not matter since my format is not planar:
EGL_YUV_COLOR_SPACE_HINT_EXT, EGL_ITU_REC601_EXT,
EGL_SAMPLE_RANGE_HINT_EXT, EGL_YUV_FULL_RANGE_EXT,
EGL_YUV_CHROMA_HORIZONTAL_SITING_HINT_EXT, EGL_YUV_CHROMA_SITING_0_EXT,
EGL_YUV_CHROMA_VERTICAL_SITING_HINT_EXT, EGL_YUV_CHROMA_SITING_0_EXT
The code I'm using is similar to all the examples I've seen, so I'm not sure what is causing the error.
Note that I have removed all the error checks for this post. All the calls above succeed except for eglCreateImageKHR.
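(The check around eglCreateImageKHR is the usual pattern; a sketch, for completeness:)
if (eglImage == EGL_NO_IMAGE_KHR) {
    EGLint err = eglGetError();   // EGL_BAD_ATTRIBUTE in my case
    fprintf(stderr, "eglCreateImageKHR failed: 0x%x\n", err);
}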

After turning the EGL log level to debug, I was able to get more information about the error and pinpoint where in the EGL source code it happened. It turns out that the format fourcc_code('R', '8', ' ', ' ') (DRM_FORMAT_R8) was not supported because my Mesa version was too old: you need Mesa 11.0.0 or above. After recompiling Mesa (I'm running Ubuntu 15.04) and installing version 11.0.0, I'm finally getting an EGL image.
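For completeness, here is roughly what the rest of the flow looks like once the image is created (a sketch, assuming the GL_OES_EGL_image extension is available, that tex is an existing OpenGL texture name, and that the PFNGLEGLIMAGETARGETTEXTURE2DOESPROC typedef is taken from the GLES extension headers or declared manually):
// Bind the EGLImage to the texture, then release the VAAPI handles.
PFNGLEGLIMAGETARGETTEXTURE2DOESPROC glEGLImageTargetTexture2DOES =
    (PFNGLEGLIMAGETARGETTEXTURE2DOESPROC)eglGetProcAddress("glEGLImageTargetTexture2DOES");
glBindTexture(GL_TEXTURE_2D, tex);
glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, (GLeglImageOES)eglImage);
// The buffer handle and the derived image can be released once the import is done.
vaReleaseBufferHandle(_vaapiContext.vaDisplay, _vaapiContext.vaImage.buf);
vaDestroyImage(_vaapiContext.vaDisplay, _vaapiContext.vaImage.image_id);
// And when the texture is no longer needed:
eglDestroyImageKHR(_eglDisplay, eglImage);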

Related

Alternative for Deprecated NSKeyedUnarchiver.UnarchiveTopLevelObject in Xamarin IOS

I'm creating a .NET MAUI app, and since I need to render a large number of points in the UI, I'm going platform-native for rendering.
I'm new to iOS, and for performance-efficient rendering I have referred to the sample given in this blog, where the existing layer is archived and unarchived into a new CAShapeLayer (effectively a deep copy). But I'm getting a deprecation warning for the code below:
var newDrawingLayer = NSKeyedUnarchiver.UnarchiveTopLevelObject(
data: NSKeyedArchiver.GetArchivedData(drawingLayer, false, out error), error: out error) as CAShapeLayer;
What would be the alternative way to achieve this in Xamarin iOS?
I finally solved it myself. Instead of the deprecated UnarchiveTopLevelObject, we can use GetUnarchivedObject; it takes an additional parameter that represents the type of object to unarchive. Here I'm unarchiving a CAShapeLayer, so you pass the object's Class (here, drawingLayer.Class). The following code solves my problem:
var newDrawingLayer = NSKeyedUnarchiver.GetUnarchivedObject(drawingLayer.Class,
    data: NSKeyedArchiver.GetArchivedData(drawingLayer, false, out error), error: out error) as CAShapeLayer;

Error setting scrollFactor on FlxSprite after update to HaxeFlixel 3.3.0

I just finished updating my HaxeFlixel install to 3.3.0 and after ironing out all the other upgrade changes I am still getting one error I can't find any explanation for. I am setting the scrollFactor property on the FlxSprites that make up my background elements, and had no problem with it before 3.3.0. I can't seem to find any references to that property changing with the update.
Here is the relevant code where I am setting the property:
//Setup bg
var bg:FlxSprite;
var scrollFactor:FlxPoint;
for (i in 0...loader.bgArray.length) {
    bg = new FlxSprite(0, 0, loader.bgArray[i][0]);
    scrollFactor = new FlxPoint(
        Std.parseFloat(loader.bgArray[i][1]),
        Std.parseFloat(loader.bgArray[i][2]));
    bg.scrollFactor = scrollFactor;
    add(bg);
}
Here is my output from haxelib list:
flixel: [3.3.0]
hxcpp: [3.1.30]
lime-tools: [1.4.0]
lime: [0.9.7]
openfl-html5: [1.4.0-beta]
openfl-native: [1.4.0]
openfl-samples: [1.3.0]
openfl: [1.4.0]
When I run lime test flash in my project folder with the above snippet I get:
source/PlayState.hx:54: characters 3-33 : Cannot access field or identifier scrollFactor for writing
Line 54 is the one where I am setting bg.scrollFactor.
I'm not sure whether this was mentioned in the release notes, but the current situation is that scrollFactor's accessors are (default, null), so there is no way to assign to it like that.
It also isn't the most idiomatic approach anyway: in HaxeFlixel, FlxPoints can and usually should be pooled, so you would normally use FlxPoint.get(x, y) instead of new FlxPoint(x, y), which makes your code run much faster.
Anyhow, down to your current situation, just use
bg.scrollFactor.set(
Std.parseFloat(loader.bgArray[i][1]),
Std.parseFloat(loader.bgArray[i][2])
);
instead of
scrollFactor = new FlxPoint(
Std.parseFloat(loader.bgArray[i][1]),
Std.parseFloat(loader.bgArray[i][2])
);
bg.scrollFactor = scrollFactor;
and it will work perfectly (and faster).

Continuation frame cannot follow current opcode

I'm using ws in a Node websocket server.
In production, I frequently get this error:
Error: continuation frame cannot follow current opcode
What is causing this?
How should I go about debugging and replicating this error in a development environment?
EDIT:
Doesn't seem to be specific to a browser, I've captured these errors in connections from Chrome, Firefox and IE10 and from different operating systems.
EDIT 2:
The error is thrown here, apparently after receiving a frame with opcode 0 (continuation) following a frame whose opcode is neither 1 (text) nor 2 (binary).
EDIT 3:
RFC 6455, section 5.2, describes the frame anatomy and what the opcodes mean.
You might run the Autobahn Testsuite (in fuzzing client mode) against your server. This will give you a detailed report like this (including wire logs) of the issues encountered.
Disclosure: I am the original author of Autobahn and work for Tavendo.
For a continuation frame to be valid, the frame before it needs to be another continuation frame or an initial frame of type 1 (text) or 2 (binary). So either a frame that is not a continuation, text, or binary frame is being sent, or a new text or binary frame is being sent before it should be.
To debug, you need to analyze the code on the client side and inspect the frames on the server side to figure out why frames are being sent out of order.
I started seeing this error, and it was caused by this code in my server.js:
wss.on('connection', function (client, request) {
    wsg = client;
    client._socket.setEncoding('utf8'); // <== oops, don't do this
    // ...
});

Is Xinerama causing issues with my code?

After trying to get a basic "Hello World"-like X11/GLX application up and running, I've found that, no matter what I try, I keep running into the same error:
X Error of failed request: BadMatch (invalid parameter attributes)
  Major opcode of failed request: 78 (X_CreateColormap)
At first, I thought it was my drivers, so I updated them to 290.10 (nVidia).
My (relevant) setup consists of the following:
nVidia GTX 550 Ti
Sabayon Linux
Kernel 3.2
But, after some surfing, it seems like it could be either the fact that I have dual monitors (one connected to a mini-HDMI port, the other to VGA/DVI), the fact that I have Xinerama enabled, or both.
I tried to compensate for both monitors by creating two GLXContext objects in my code, which, as expected, didn't do anything (be nice: I just started learning this API). I did this because it seemed like a GLXContext had something to do with monitor input/output.
After that didn't work, I tried glXGetConfig, and that didn't work either. So I kept looking around and found a forum post from a couple of years ago about someone having a similar issue, with Xinerama apparently the cause. The weird thing is that it was posted back in 2009, so one would think nVidia would have fixed this by now.
I'm at a loss as to what to do, and I believe I'm stuck unless I can fix this.
Anyone can view my code here (pastebin), along with my post on SuperUser here.
I could really use some help on this one.
tl;dr
setWindowAtt.colormap = colorMap;
setWindowAtt.event_mask = ExposureMask | KeyPressMask;
win = XCreateWindow( dp, root, 0, 0, 600, 600, 0, visualInfo->depth, InputOutput, visualInfo->visual, CWColormap | CWEventMask, &setWindowAtt );
XMapWindow( dp, win );
XStoreName( dp, win, VI_UN_DEF_WIN_NAME );
glxContext = glXCreateContext( dp, visualInfo, NULL, GL_TRUE ); //error
glXMakeCurrent( dp, win, glxContext );
The clue is right in front of you:
Major opcode of failed request: 78 ( X_CreateColormap )
That means XCreateColormap failed. If it wanted to tell you a GLX command had failed, it would have said something about GLX instead.
After reading the rest of your code: the visual you're getting from glXChooseVisual is probably a TrueColor-class visual, and, as the manual page for XCreateColormap says, colormaps for TrueColor visuals must be allocated with AllocNone; anything else generates a BadMatch error.
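A minimal sketch of the fix, reusing the names from your snippet above (dp, root, visualInfo, setWindowAtt): allocate the colormap for the GLX visual with AllocNone before creating the window.
Colormap colorMap = XCreateColormap( dp, root, visualInfo->visual, AllocNone );
setWindowAtt.colormap = colorMap;
setWindowAtt.event_mask = ExposureMask | KeyPressMask;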

How to get webcam video stream bytes in c++

I am targeting Windows machines. I need access to a pointer to the byte array describing the individual frames streamed from an attached USB webcam. I looked at the PlayCap DirectShow sample from the Windows SDK, but I don't see how to get to the raw data; frankly, I don't understand how the video actually gets to the window. Since I don't really need anything other than video capture, I would prefer not to use OpenCV.
Visual Studio 2008, C++.
Insert the Sample Grabber filter: connect the camera source to the Sample Grabber and then to the Null Renderer. The Sample Grabber is a transform filter, so you need to feed its output somewhere, and if you don't need to render it, the Null Renderer is a good choice.
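A rough sketch of building that graph (error handling omitted; pGraph, pBuilder, and pCap here stand for your filter graph, capture graph builder, and camera capture filter, which you will already have created):
// Sketch: camera source -> Sample Grabber -> Null Renderer.
IBaseFilter *pGrabberF = NULL, *pNullF = NULL;
CoCreateInstance(CLSID_SampleGrabber, NULL, CLSCTX_INPROC_SERVER,
                 IID_IBaseFilter, (void**)&pGrabberF);
CoCreateInstance(CLSID_NullRenderer, NULL, CLSCTX_INPROC_SERVER,
                 IID_IBaseFilter, (void**)&pNullF);
pGraph->AddFilter(pGrabberF, L"Sample Grabber");
pGraph->AddFilter(pNullF, L"Null Renderer");
// Connect everything from the capture pin through the grabber to the null renderer.
pBuilder->RenderStream(&PIN_CATEGORY_CAPTURE, &MEDIATYPE_Video,
                       pCap, pGrabberF, pNullF);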
You can configure the sample grabber using ISampleGrabber. You can arrange a callback to your app for each frame, giving you either a pointer to the bits themselves, or a pointer to the IMediaSample object which will also give you the metadata.
You need to implement ISampleGrabberCB on your object, and then you need something like this (pseudo code)
IFilterInfoPtr m_pFilterInfo;
ISampleGrabberPtr m_pGrabber;
m_pGrabber = pFilter;   // pFilter is the Sample Grabber's IBaseFilter; the smart pointer queries it for ISampleGrabber
m_pGrabber->SetBufferSamples(false);
m_pGrabber->SetOneShot(false);
// force to 24-bit mode
AM_MEDIA_TYPE mt;
ZeroMemory(&mt, sizeof(mt));
mt.majortype = MEDIATYPE_Video;
mt.subtype = MEDIASUBTYPE_RGB24;
m_pGrabber->SetMediaType(&mt);
m_pGrabber->SetCallback(this, 0);
// SetCallback increments a refcount on our own object, but we own the
// grabber, so this creates a circular reference: release the extra
// reference here, and AddRef again before calling SetCallback(NULL) at teardown.
Release();
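For reference, a minimal sketch of what the ISampleGrabberCB object itself might look like (assuming the deprecated qedit.h declarations are available; FrameGrabberCB is just an illustrative name):
class FrameGrabberCB : public ISampleGrabberCB
{
    volatile LONG m_refCount;
public:
    FrameGrabberCB() : m_refCount(1) {}
    // IUnknown
    STDMETHODIMP_(ULONG) AddRef() { return InterlockedIncrement(&m_refCount); }
    STDMETHODIMP_(ULONG) Release()
    {
        ULONG n = InterlockedDecrement(&m_refCount);
        if (n == 0) delete this;
        return n;
    }
    STDMETHODIMP QueryInterface(REFIID riid, void** ppv)
    {
        if (riid == IID_IUnknown || riid == IID_ISampleGrabberCB) { *ppv = this; AddRef(); return S_OK; }
        *ppv = NULL;
        return E_NOINTERFACE;
    }
    // Called per frame when SetCallback(this, 0) is used: pSample wraps the frame.
    STDMETHODIMP SampleCB(double sampleTime, IMediaSample* pSample)
    {
        BYTE* pBits = NULL;
        if (SUCCEEDED(pSample->GetPointer(&pBits)))
        {
            long len = pSample->GetActualDataLength();
            // pBits now points to len bytes of RGB24 pixel data for this frame.
        }
        return S_OK;
    }
    // Called instead when SetCallback(this, 1) is used; not needed here.
    STDMETHODIMP BufferCB(double sampleTime, BYTE* pBuffer, long bufferLen) { return S_OK; }
};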
