After trying to get a basic "Hello World"-like X server application up and running, I've found that, no matter what I try, I keep running into the same error:
X Error of failed request:  BadMatch (invalid parameter attributes)
Major opcode of failed request:  78 (X_CreateColormap)
At first, I thought it was my drivers, so I updated them to 290.10 (nVidia).
My (relevant) setup consists of the following:
nVidia GTX 550 Ti
Sabayon Linux
Kernel 3.2
But after some surfing, it seems it could be the fact that I have dual monitors (one connected to a mini-HDMI port, the other to VGA/DVI), the fact that I have Xinerama enabled, or both.
I tried to compensate for both monitors by creating two GLXContext objects in my code, which, as expected, didn't do anything (be nice: I just started learning this API). I did this because it seemed like a GLXContext had something to do with monitor input/output.
After that didn't work, I tried glXGetConfig, and that didn't work either. So I started looking around more and found a forum post about someone who had the same issue, which turned out to be caused by Xinerama. The weird thing is that it was posted back in 2009, so one would think that nVidia would have fixed this by now.
I'm at a loss as to what to do, and I believe that I'm kind of screwed somehow unless I can fix this.
Anyone can view my code here (pastebin), along with my post on SuperUser here.
I could really use some help on this one.
tl;dr
setWindowAtt.colormap = colorMap;
setWindowAtt.event_mask = ExposureMask | KeyPressMask;
win = XCreateWindow( dp, root, 0, 0, 600, 600, 0, visualInfo->depth, InputOutput, visualInfo->visual, CWColormap | CWEventMask, &setWindowAtt );
XMapWindow( dp, win );
XStoreName( dp, win, VI_UN_DEF_WIN_NAME );
glxContext = glXCreateContext( dp, visualInfo, NULL, GL_TRUE ); //error
glXMakeCurrent( dp, win, glxContext );
The clue is right in front of you:
Major opcode of failed request:  78 (X_CreateColormap)
That means XCreateColormap failed. If it wanted to tell you a GLX command had failed, it would have said something about GLX instead.
After reading the rest of your code: the visual you're getting from glXChooseVisual is probably a TrueColor-class visual, and, as the manual for XCreateColormap says, colormaps for TrueColor visuals must be allocated with AllocNone; you get BadMatch if you pass anything else.
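Concretely, the fix is in the colormap creation that precedes your tl;dr snippet. A minimal sketch, reusing the dp, root and visualInfo names from your code:
// AllocNone is required for TrueColor (and DirectColor) visuals.
Colormap colorMap = XCreateColormap( dp, root, visualInfo->visual, AllocNone );
setWindowAtt.colormap = colorMap;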
Related
I am trying to build myself a digital-to-analog audio converter.
I have an STM32F769I Discovery: https://www.st.com/en/evaluation-tools/32f769idiscovery.html
which has SPDIFRX and SPDIFTX ports.
I found a fairly good starting point here: http://www.tjaekel.com/DiscoveryF7Audio/index.html
The author also posted here: https://www.openstm32.org/forumthread921
But he used an STM32746G Discovery instead: https://www.st.com/en/evaluation-tools/32f746gdiscovery.html
So I went and tried to get just his SPDIF audio portion working on my board.
My project can be found here (I hope it compiles; with CubeIDE you never know what will happen :)): https://www.mediafire.com/file/n0s2z9p6nn735qg/SPDIF_Example.zip/file
I have no idea what I am doing wrong, but for some reason SPDIF_RX_IRQHandler (in stm32f7xx_it.c) is never called (the LED never turns green; yeah, my debugging techniques are primitive, but I will explain why later).
Because of that, HAL_SPDIFRX_ReceiveDataFlow_IT (in spdifrx.c) always returns HAL_TIMEOUT, and of course no audio is ever played on the speakers.
I am not sure what I am doing wrong.
When the MCU starts, I call BSP_SPDIFRX_Init() (defined in spdifrx.c) from main.c, right after I take care of the clock:
if (BSP_SPDIFRX_Init() != HAL_OK)
{
Error_Handler();
}
And it appears it initializes all right, because I get HAL_OK back.
Maybe I am not initializing the GPIO properly from HAL_MspInit in stm32f7xx_hal_msp.c.
I am really out of ideas about what I am doing wrong, because the analog side of the audio does init; I can hear that as a pop-pop from the speakers when I power up my MCU. It's just the SPDIF side that has problems.
Or is my setup broken?
I am using this component radio as my SPDIF transmitter (Hama DIT2000M): https://de.hama.com/webresources/article-documents/00054/man/00054821man_en.pdf
It says it has SPDIF audio out (it says it's digital over coaxial).
I know its S/PDIF output is working fine, because it plays just fine into my component receiver (it reports as 48 kHz stereo).
Is my cable too long? I am using this cable: https://i.imgur.com/JqAxePF.jpg
(not sure who made it)
Now, why do I debug with blinking LEDs? Because there is no computer where my test subject (my Hama receiver) is, so… blinking LEDs it is. I would also like to avoid additional libraries and keep this a minimal working example, because you never know what problems they could bring; that is why the LCD is not used right now.
I hope someone has some advice, either on how to get any data into the SPDIF port (because right now, for some reason, I don't get anything) or on what I am doing wrong that keeps my audio from playing. Using the STM32F769I Discovery instead of the STM32746G Discovery is probably not responsible.
I hope that this is a proper place for this kind of question, because I did ask a question regarding SPDIF on the STM forum: https://community.st.com/s/feed/0D53W00001z0RaqSAE
But I didn't get any useful advice there.
Now, SPDIF really does not have many examples. There is only a polling example, which does work (with the same cable); there is no interrupt example, and my interrupt attempt (you can read the STM forum post I linked) is not working either (interrupts are probably not broken, right?).
So yeah, I am a bit lost, not sure what to do or whom to ask, so I tried here.
PS: I know Stack Overflow does not like links to code, but I believe something is wrong with my project (interrupts don't fire for some reason), and it's really hard to put all of this into the question.
Thanks for answering, and best regards.
I managed to solve this. I guess I did not initialize the SPDIF GPIO properly.
After changing this:
GPIO_InitStructure.Pin = GPIO_PIN_7;
GPIO_InitStructure.Mode = GPIO_MODE_AF_PP;
GPIO_InitStructure.Pull = GPIO_NOPULL;
GPIO_InitStructure.Speed = GPIO_SPEED_FAST;
GPIO_InitStructure.Alternate = DISCOVERY_SPDIFRX_AF;
HAL_GPIO_Init(GPIOD, &GPIO_InitStructure);
to this:
GPIO_InitStructure.Pin = GPIO_PIN_12;
GPIO_InitStructure.Mode = GPIO_MODE_AF_PP;
GPIO_InitStructure.Pull = GPIO_NOPULL;
GPIO_InitStructure.Speed = GPIO_SPEED_FAST;
GPIO_InitStructure.Alternate = GPIO_AF7_SPDIFRX;
HAL_GPIO_Init(GPIOG, &GPIO_InitStructure);
interrupts started to fire. Evidently, on this board the SPDIFRX input is on pin PG12 with alternate function GPIO_AF7_SPDIFRX, not on PD7 as in the code written for the other board.
I have implemented hardware decoding on Linux using VAAPI via FFmpeg. Since I have an OpenGL application, I am converting the decoded VAAPI surfaces to OpenGL textures using vaCopySurfaceGLX. This works fine, except that a copy (on the GPU) is made. I was told that I could use the VAAPI surface directly as an OpenGL texture using EGL. I have looked at some examples (mainly the Kodi source code), but I'm not able to create the EGLImageKHR: eglCreateImageKHR returns 0, and when I check for errors, I get an EGL_BAD_ATTRIBUTE error, but I don't understand why.
Below is how I'm converting the VAAPI surface.
During initialization, I set up EGL this way:
// currentDisplay comes from call to glXGetCurrentDisplay() and is also used when getting the VADisplay like this: vaGetDisplay(currentDisplay)
EGLint major, minor;
_eglDisplay = eglGetDisplay(currentDisplay);
eglInitialize(_eglDisplay, &major, &minor);
eglBindAPI(EGL_OPENGL_API);
Then later, to create my EGL image, this is what I do:
// _vaapiContext.vaDisplay comes from vaGetDisplay(currentDisplay)
// surface is the VASurfaceID of the surface I want to use in OpenGL
vaDeriveImage(_vaapiContext.vaDisplay, surface, &_vaapiContext.vaImage);
VABufferInfo buf_info;
memset(&buf_info, 0, sizeof(buf_info));
buf_info.mem_type = VA_SURFACE_ATTRIB_MEM_TYPE_DRM_PRIME;
vaAcquireBufferHandle(_vaapiContext.vaDisplay, _vaapiContext.vaImage.buf, &buf_info);
EGLint attribs[] = {
EGL_WIDTH, _vaapiContext.vaImage.width,
EGL_HEIGHT, _vaapiContext.vaImage.height,
EGL_LINUX_DRM_FOURCC_EXT, fourcc_code('R', '8', ' ', ' '),
EGL_DMA_BUF_PLANE0_FD_EXT, buf_info.handle,
EGL_DMA_BUF_PLANE0_OFFSET_EXT, _vaapiContext.vaImage.offsets[0],
EGL_DMA_BUF_PLANE0_PITCH_EXT, _vaapiContext.vaImage.pitches[0],
EGL_NONE
};
EGLImageKHR eglImage = eglCreateImageKHR(_eglDisplay, EGL_NO_CONTEXT, EGL_LINUX_DMA_BUF_EXT, (EGLClientBuffer)NULL, attribs);
Looking at what could cause this error in the following document, https://www.khronos.org/registry/egl/extensions/EXT/EGL_EXT_image_dma_buf_import.txt, I also tried to add the following options, which should not matter since my format is not planar:
EGL_YUV_COLOR_SPACE_HINT_EXT, EGL_ITU_REC601_EXT,
EGL_SAMPLE_RANGE_HINT_EXT, EGL_YUV_FULL_RANGE_EXT,
EGL_YUV_CHROMA_HORIZONTAL_SITING_HINT_EXT, EGL_YUV_CHROMA_SITING_0_EXT,
EGL_YUV_CHROMA_VERTICAL_SITING_HINT_EXT, EGL_YUV_CHROMA_SITING_0_EXT
The code that I'm using is similar to all the examples I've seen so I'm not sure what the error is.
Note that I have removed all the error checks for this post. All the calls above are successful except for eglCreateImageKHR.
After turning the EGL log level up to debug, I was able to get more information about the error and pinpoint where in the EGL source code it happened. It turns out that the format fourcc_code('R', '8', ' ', ' ') was not supported because my Mesa version was too old; you need Mesa 11.0.0 or above. After recompiling Mesa (I'm running Ubuntu 15.04) and installing the 11.0.0 version, I'm finally getting an EGL image.
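For anyone hitting the same wall: newer Mesa versions (with the EGL_EXT_image_dma_buf_import_modifiers extension) let you list the DRM fourcc formats the EGL implementation actually accepts, so you can check for R8 support up front instead of guessing. A minimal sketch, assuming the extension is available on your driver; _eglDisplay is the display from the code above:
#include <vector>
#include <EGL/egl.h>
#include <EGL/eglext.h>

// Load the extension entry point at runtime.
PFNEGLQUERYDMABUFFORMATSEXTPROC queryFormats =
    (PFNEGLQUERYDMABUFFORMATSEXTPROC)eglGetProcAddress("eglQueryDmaBufFormatsEXT");
if (queryFormats)
{
    EGLint count = 0;
    queryFormats(_eglDisplay, 0, nullptr, &count);            // first call: get the count
    std::vector<EGLint> formats(count);
    queryFormats(_eglDisplay, count, formats.data(), &count); // second call: fill the list
    // Each entry is a DRM fourcc code; look for fourcc_code('R', '8', ' ', ' ').
}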
When calling IDirect3D9::CreateDevice with the BehaviorFlags D3DCREATE_ADAPTERGROUP_DEVICE in order to create a fullscreen multihead device (with 2 or more monitors attached) the function returns D3DERR_INVALIDCALL, when running the application on Windows 10 (build 1511, or build 10240).
The same code works fine on Windows 7 (on a multitude of different machines), and also on Windows 8.1 (with the latest updates). Also creating individual D3D9 devices (fullscreen) for each attached monitor to the graphics adapter works fine on Windows 10.
D3D9Ex by the way shows exactly the same behavior. Can anyone point me to a working D3D9 multihead example that works on Windows 10? Thanks!
I observed the exact same behaviour with "CreateDevice".
But when you use "CreateDeviceEx" it works ... well ... almost :-(.
You may now create the device and use it, but under some circumstances (especially if you use the same resolution the desktop already had) you won't see anything, and "Present" will continuously return "S_PRESENT_MODE_CHANGED". But if you then switch the second monitor to some other resolution via ResetEx and then switch back to the desktop resolution, voilà, it works. I put that on a key I can press after initialization:
const int idx = 1;
int OldWidth = D3DPresPar[idx].BackBufferWidth;
int OldHeight = D3DPresPar[idx].BackBufferHeight;
D3DPresPar[idx].BackBufferWidth = 1280;
D3DPresPar[idx].BackBufferHeight = 720;
D3DDispMode[idx].Width = 1280;
D3DDispMode[idx].Height = 720;
FailCheck(pD3DDevice->ResetEx(D3DPresPar, D3DDispMode), "ResetEX");
D3DPresPar[idx].BackBufferWidth = OldWidth;
D3DPresPar[idx].BackBufferHeight = OldHeight;
D3DDispMode[idx].Width = OldWidth;
D3DDispMode[idx].Height = OldHeight;
FailCheck(pD3DDevice->ResetEx(D3DPresPar, D3DDispMode), "ResetEX");
And after pressing the key it suddenly works. Weird, eh?
I confirmed this behaviour on multiple computers with NVIDIA, AMD and Intel graphics adapters, so the bug seems to be on Microsoft's side.
Conclusion: theoretically it should work, but there is some annoying bug in Windows 10 multihead initialization.
With some weird tricks you can achieve what you want, but these tricks are just too weird to use in production.
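For reference, here is roughly what the creation path looks like. A sketch only, assuming a two-head adapter group, an existing IDirect3D9Ex* pD3D9Ex, and hypothetical window handles hWnd[0] and hWnd[1] (one device window per head):
// One D3DPRESENT_PARAMETERS and one D3DDISPLAYMODEEX per head in the group.
D3DPRESENT_PARAMETERS pp[2] = {};
D3DDISPLAYMODEEX mode[2] = {};
for (int i = 0; i < 2; ++i)
{
    pp[i].BackBufferWidth            = 1920;
    pp[i].BackBufferHeight           = 1080;
    pp[i].BackBufferFormat           = D3DFMT_X8R8G8B8;
    pp[i].SwapEffect                 = D3DSWAPEFFECT_DISCARD;
    pp[i].Windowed                   = FALSE;        // fullscreen on every head
    pp[i].FullScreen_RefreshRateInHz = 60;
    pp[i].hDeviceWindow              = hWnd[i];

    mode[i].Size             = sizeof(D3DDISPLAYMODEEX);
    mode[i].Width            = 1920;
    mode[i].Height           = 1080;
    mode[i].RefreshRate      = 60;
    mode[i].Format           = D3DFMT_X8R8G8B8;
    mode[i].ScanLineOrdering = D3DSCANLINEORDERING_PROGRESSIVE;
}

IDirect3DDevice9Ex* pD3DDevice = nullptr;
HRESULT hr = pD3D9Ex->CreateDeviceEx(
    0,                              // master adapter of the group
    D3DDEVTYPE_HAL, hWnd[0],
    D3DCREATE_HARDWARE_VERTEXPROCESSING | D3DCREATE_ADAPTERGROUP_DEVICE,
    pp, mode, &pD3DDevice);
This is the call that returns D3DERR_INVALIDCALL for plain CreateDevice on Windows 10, and that needs the resolution round-trip via ResetEx described above before Present behaves.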
I just finished updating my HaxeFlixel install to 3.3.0 and after ironing out all the other upgrade changes I am still getting one error I can't find any explanation for. I am setting the scrollFactor property on the FlxSprites that make up my background elements, and had no problem with it before 3.3.0. I can't seem to find any references to that property changing with the update.
Here is the relevant code where I am setting the property:
//Setup bg
var bg:FlxSprite;
var scrollFactor:FlxPoint;
for (i in 0...loader.bgArray.length){
bg = new FlxSprite(0, 0, loader.bgArray[i][0]);
scrollFactor = new FlxPoint(
Std.parseFloat(loader.bgArray[i][1]),
Std.parseFloat(loader.bgArray[i][2]));
bg.scrollFactor = scrollFactor;
add(bg);
}
Here is my output from haxelib list:
flixel: [3.3.0]
hxcpp: [3.1.30]
lime-tools: [1.4.0]
lime: [0.9.7]
openfl-html5: [1.4.0-beta]
openfl-native: [1.4.0]
openfl-samples: [1.3.0]
openfl: [1.4.0]
When I run lime test flash in my project folder with the above snippet I get:
source/PlayState.hx:54: characters 3-33 : Cannot access field or identifier scrollFactor for writing
Line 54 is the one where I am setting bg.scrollFactor.
I'm not sure whether this change was announced anywhere, but the current situation is indeed that scrollFactor's accessors are (default, null), so there is no way you can assign it like that.
It also isn't the most idiomatic way to do it anyway: in HaxeFlixel, FlxPoints can and mostly should be pooled, so you would usually use FlxPoint.get(x, y) rather than new FlxPoint(x, y), which will make your code run much faster.
Anyhow, down to your current situation, just use
bg.scrollFactor.set(
Std.parseFloat(loader.bgArray[i][1]),
Std.parseFloat(loader.bgArray[i][2])
);
instead of
scrollFactor = new FlxPoint(
Std.parseFloat(loader.bgArray[i][1]),
Std.parseFloat(loader.bgArray[i][2])
);
bg.scrollFactor = scrollFactor;
and it will work perfectly (and faster).
This is in response to dan's (dan^spotify on IRC) offer to take a look at my testcase, but I post it here in case anyone has encountered similar issues.
I'm experiencing a problem with libspotify where the application crashes (memory access violation) in both of these two scenarios:
the first sp_session_process_events (triggered by notify main thread callback) that's called after the sp_session_logout() function is called crashes the application
skipping logout and calling sp_session_release() crashes the application
I've applied sufficient synchronization from the session callbacks, and I'm otherwise operating on a single thread.
I've made a small testcase that does the following:
Creates session
Logs in
Waits 10 seconds
Attempts to logout, upon which it crashes (when calling sp_session_process_events())
If it were successful in logging out (which it isn't), would call sp_session_release()
I made a Gist for the testcase. It can be found here: https://gist.github.com/4496396
The test case is made using Qt (which is what I'm using for my project), so you'd need Qt 5 to compile it. I've also only written it with Windows and Linux in mind (don't have Mac). Assuming you have Qt 5 and Qt Creator installed, the instructions are as follows:
Download the gist
Copy the libspotify folder into the same folder as the .pro file
Copy your appkey.c file into the same folder
Edit main.cpp to login with your username and password
Edit lines 38-39 in sessiontest.cpp and set the cache and settings paths to your liking
Open up the .pro file and run from Qt Creator
I'd be very grateful if someone could tell me what I'm doing wrong, as I've spent so many hours trying anything I could think of or just staring at it, and I fear I've gone blind to my own mistakes by now.
I've tested it on both Windows 7 and Linux Ubuntu 12.10, and I've found some difference in behavior:
On Windows, the testcase crashes invariably regardless of settings and cache paths.
On Linux, if setting settings and cache to "" (empty string), logging out and releasing the session works fine.
On Linux, if paths are anything else, the first run (when folder does not already exist) logs out and releases session as it should, but on the next run (when folder already exists), it crashes in the exact same way as it does on Windows.
Also, I can report that sp_session_flush_caches() does not cause a crash.
EDIT: Also, hugo___ on IRC was kind enough to test it on OSX for me. He reported no crashes despite running the application several times in a row.
While you very well may be looking at a bug in libspotify, I'd like to point out a possibly redundant call to sp_session_process_events(), from what I gathered from looking at your code.
void SessionTest::processSpotifyEvents()
{
if (m_session == 0)
{
qDebug() << "Process: No session.";
return;
}
int interval = 0;
sp_session_process_events(m_session, &interval);
qDebug() << interval;
m_timerId = startTimer(interval);
}
It seems this code will pick up the interval value and start a timer on it to trigger a subsequent call to event(). However, this code will also call startTimer when interval is 0, which is strictly not necessary; rather, it means the app can go about doing other stuff until it gets a notify_main_thread callback. The docs on startTimer say: "If interval is 0, then the timer event occurs once every time there are no more window system events to process." I'm not sure exactly what that means, but it seems it can produce at least one redundant call to sp_session_process_events().
http://qt-project.org/doc/qt-4.8/qobject.html#startTimer
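A minimal way to avoid those redundant wakeups, reusing the member names from your testcase, might be to only arm the timer for a positive interval:
int interval = 0;
sp_session_process_events(m_session, &interval);
if (interval > 0)
    m_timerId = startTimer(interval); // otherwise just wait for notify_main_thread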
I notice that you will get a crash on sp_session_release if you have a track playing when you call it.
I have been chasing this issue today. Login/logout works just fine on Mac, but the issue was 100% repeatable as you described on Windows.
By registering empty callbacks for offline_status_updated and credentials_blob_updated, the crash went away. That was a pretty unsatisfying fix, and I wonder if any libspotify developers want to comment on it.
Callbacks registered in my app are:
logged_in
logged_out
notify_main_thread
log_message
offline_status_updated
credentials_blob_updated
I should explicitly point out that I did not try this on the code you supplied. It would be interesting to know if adding those two extra callbacks works for you. Note that the functions I supply do absolutely nothing. They just have to be there and be registered when you create the session.
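In other words, something along these lines when filling in sp_session_callbacks (a sketch; the on_* handler names are placeholders for your existing functions):
// Handlers that do nothing; they just have to be registered.
static void offline_status_updated(sp_session *session) {}
static void credentials_blob_updated(sp_session *session, const char *blob) {}

// When building the session config:
sp_session_callbacks callbacks = {};
callbacks.logged_in                = &on_logged_in;           // your existing handlers
callbacks.logged_out               = &on_logged_out;
callbacks.notify_main_thread       = &on_notify_main_thread;
callbacks.log_message              = &on_log_message;
callbacks.offline_status_updated   = &offline_status_updated; // the two empty ones
callbacks.credentials_blob_updated = &credentials_blob_updated;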
Adding the following call in your "logged in" libspotify callback seems to fix this crash, as detailed in this SO post:
sp_session_playlistcontainer(session);