I'm using Godot_v3.2.1-stable_win64.exe (the current Godot version) on Windows 10. When running a project everything seems to work fine, but in the actual IDE of the Godot engine the screen doesn't update fluidly on every mouse interaction; it only seems to redraw about every 5 seconds, as if the FPS were very low.
So mostly, when hovering over or clicking something, it only lights up or gets triggered after you have already clicked on something else elsewhere (which is useless, of course).
This sometimes makes it impossible to even hit a button in the IDE.
E.g. when renaming a file, a window/box for renaming pops up but you do not see it, because the IDE screen isn't updated. So if you don't blindly click on the box (which you can't see), the chance to rename is lost, because the box closes when you click anywhere else. See what I mean?
Thank you for listening. Have a great day.
This is a known issue. Try updating your graphics driver to the latest version provided by Intel (not your OEM).
Good afternoon everyone.
Note: I have spent a couple of days looking for the answer to this question.
I have a bit of code that controls a separate piece of software to enter data.
I cannot access the scripting side of this software, so I will just leave that bit out.
I have the following working:
- Mouse movement
- Mouse click
- Data input
The only problem here is that while the script is running, I can still move the mouse manually, which in turn can cause it to click on something wrong.
I know that:
- "BlockInput = True" deactivates both the mouse and the keyboard. That stops me from using the keyboard, otherwise it would work great.
- "Application.Interactive = False" makes the cursor show the busy ("thinking") state. That stops the program from clicking on the items I need clicked, while still letting me use the mouse.
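For reference, here is a minimal sketch of the Win32 call that such "BlockInput" wrappers sit on (C++; in VBA the same user32 function can be reached through a Declare statement). It just illustrates the behaviour described above: it suspends both devices at once, so it cannot keep the mouse locked while leaving the keyboard usable.

// Minimal Win32 sketch. BlockInput() suspends real keyboard *and* mouse input
// until it is turned off again, the blocking thread exits, or the user presses
// Ctrl+Alt+Del. On Vista and later it typically requires an elevated process.
#include <windows.h>

void RunAutomation()
{
    BlockInput(TRUE);             // the user can no longer move the mouse or type
    // ... drive the other application here (mouse moves, clicks, data entry) ...
    BlockInput(FALSE);            // always restore input when finished
}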
I hope this makes sense. This script will only be temporary until I get scripting access to the software.
Thank you in advance.
Some context
I've recently switched to Ubuntu Budgie (from Unity), and I am really tired of the Plank/panel menu combo. I cannot find a setting that suits me, because depending on my screen setup, there's always something in the wrong place.
I am literally unable to show the menu on certain edges if I activate auto-hide, and if I don't activate it, it's not nice at all, to the point that I have removed Plank altogether. (Am I hitting strange bugs on this OS, or is it really this messy?)
My idea
With great frustration come new ideas, so I thought again about one I had in the past: I would like a circular menu that pops up around my mouse cursor when I press a given key combination (very much the kind of thing you would find in some games).
The main use case is to reach "pinned" application shortcuts easily when I need them, but perhaps other things would fit in well too (commands, ...).
Questions
So my questions are:
Does such a thing already exist?
If it doesn't, how difficult would it be to build? (How much time, complexity, ...)
What tools/libraries are needed for such a project? I know I'll find plenty of explanations on the GNOME developer website, but I could really use some more help.
Since you mention buggy behaviour of Plank depending on the screen configuration, I suspect you are suffering from this bug. In short: the values Plank reports for the space it needs are not always correct in a multi-monitor setup.
A neat option to replace at least part of the functionality is Ulauncher, by default called from a shortcut, but you could trigger it from anything that is capable of running its command.
Since Ulauncher's window simply shows up in the window list, you can easily write a script that moves it to the current mouse position (a rough sketch follows below).
In case you'd need any help in that, just leave a comment.
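As a starting point, a rough Xlib sketch of such a script (C++, compile with -lX11). It assumes the window's title still contains "Ulauncher" and that it is a direct child of the root window; with a reparenting window manager you may need to search one level deeper, or simply shell out to xdotool (getmouselocation, search --name, windowmove) instead.

// Rough sketch: move the first top-level window whose name contains
// "Ulauncher" to the current pointer position.
#include <X11/Xlib.h>
#include <cstdio>
#include <cstring>

int main()
{
    Display* dpy = XOpenDisplay(nullptr);
    if (!dpy) { std::fprintf(stderr, "cannot open display\n"); return 1; }
    Window root = DefaultRootWindow(dpy);

    // Pointer position, relative to the root window.
    Window r, c; int rx = 0, ry = 0, wx, wy; unsigned int mask;
    XQueryPointer(dpy, root, &r, &c, &rx, &ry, &wx, &wy, &mask);

    // Walk the top-level windows and move the matching one.
    Window parent; Window* children = nullptr; unsigned int n = 0;
    if (XQueryTree(dpy, root, &r, &parent, &children, &n)) {
        for (unsigned int i = 0; i < n; ++i) {
            char* name = nullptr;
            if (XFetchName(dpy, children[i], &name) && name) {
                if (std::strstr(name, "Ulauncher"))
                    XMoveWindow(dpy, children[i], rx, ry);
                XFree(name);
            }
        }
        XFree(children);
    }
    XCloseDisplay(dpy);
    return 0;
}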
Not sure if you are also referring to quick access to the window list, but for that you could use the Window Previews applet, or even the Workspace Overview applet, so life without Plank is possible.
The default behavior for right-clicking on most recent Linux distros is to select a menu item in the right-click menu as soon as the right mouse button is released. While this saves some mouse presses, it is driving some of my Windows-trained (and rather vocal) coworkers completely bonkers, and a lot of searching has told me that there is no option to change this behavior in the distros they are using (mostly RHEL 6).
To make my work environment a little less volatile, I would like to try to program a fix or patch for their systems so that right-clicking works the way they are used to (the menu should not even appear until the right mouse button is released), but I don't know what kinds of tutorials I should be looking for (shell scripts? C? etc.) in order to do this.
If someone could point me in the right direction, that would be lovely! (Or if someone already happens to know of a fix, that would work too, though a lot of Googling has told me that there does not appear to be one currently.)
Follow the directions here:
https://unix.stackexchange.com/questions/20550/how-to-disable-the-forward-back-buttons-on-my-mouse
But instead of disabling the forward and back buttons, disable the right mouse button. You can easily dump the resulting command into a shell script which calls xmodmap. Then you can make icons that disable and re-enable the right mouse button for the times when they will need it.
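If you'd rather not depend on the xmodmap binary, the same pointer-map change can also be made directly through Xlib; a rough C++ sketch (compile with -lX11), editing the same button map that the xmodmap "pointer = ..." line manipulates:

// Rough sketch: disable or restore physical mouse button 3 (right click) by
// editing the X pointer map, which is also what the xmodmap approach changes.
#include <X11/Xlib.h>
#include <cstdio>
#include <cstring>

int main(int argc, char** argv)
{
    bool enable = (argc > 1 && std::strcmp(argv[1], "on") == 0);

    Display* dpy = XOpenDisplay(nullptr);
    if (!dpy) { std::fprintf(stderr, "cannot open display\n"); return 1; }

    unsigned char map[256];
    int n = XGetPointerMapping(dpy, map, sizeof(map));    // current button map
    if (n >= 3) {
        map[2] = enable ? 3 : 0;                           // a 0 entry disables that button
        if (XSetPointerMapping(dpy, map, n) == MappingBusy)
            std::fprintf(stderr, "a mouse button is currently pressed, try again\n");
    }
    XCloseDisplay(dpy);
    return 0;
}

Run without arguments it disables right click; run with "on" it restores it, so the two invocations can sit behind the enable/disable icons mentioned above.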
We've got some in-house applications built in MFC, with OpenGL drawing routines. They all use the same code to draw on the screen and either print the screen or save it to a JPEG file. Everything's been working fine in Windows XP, and I need to find a way to make them work on Vista.
In three of our applications, everything works. In the remaining one, I can get the window border, title bar, menus, and task bar, but the interior never shows up. As I said, these applications use the exact same code to write to the screen and capture the window image, and the only difference I see that looks like it might be relevant is that the problem application uses the MFC multiple document interface, while the ones that work use the single document interface.
Either the answer isn't on the net, or I'm worse at Googling than I thought. I asked on the MSDN forums, and the only practical suggestion I got was to use GDI+ rather than GDI, but that made no difference. I have tried different things with every part of the code that captures and prints or saves, given a pointer to the window, so apparently it's a matter of the window itself. I haven't rebuilt the offending application using SDI yet, and I really don't have any other ideas.
Has anybody seen anything like this?
What I've got is four applications. They use a lot of common code, and share the actual .h and .cpp files, so I know the drawing and screen capture code is identical.
There is a WindowToDIB() routine that takes a pWnd pointer, a source rectangle, and a destination size. It looks like very slightly adapted Microsoft code, and I've found other functions from this file on the Microsoft website. Of my four applications, three handle this just fine, but one doesn't. The most obvious difference is that the problem one is MDI.
It looks to me like the pWnd is the problem. I'm not an MFC guru by a long shot, and it seems to me that the issue may be that we've got one window set up in the SDI applications and more than one in the MDI one. I may be passing the wrong pWnd to the function.
In the meantime, it has started working properly on the 64-bit Vista test machine, although it still doesn't work on the 32-bit Vista machine. I have no idea why. I haven't changed anything since the last tests, and I didn't think anybody else had. (On the 32-bit version, the Print Screen key works as expected, but it does not save the screen as a JPEG.)
Your question title mentions screen capture, but your actual question doesn't. Please elaborate more clearly: is the problem that you can do a screen capture of three of your applications, but not of the fourth one? You could use different screen-capture software that is able to capture OpenGL/DirectX windows; those surfaces are handled directly by the window manager and won't show up with a simple 'PrtScn'.
Switching to GDI+ won't solve it, nor will switching to SDI.
If it's the content of the CView that you want, then yes, that should be the right one. If it's the content of the whole screen (or at least the window content, without the toolbar(s) and status bar), then you should pass it the CMainFrame (that's the default name, which may have been changed; the one that is derived from CMDIFrameWnd).
Can you post the code of WindowToDIB()? I've just tried it and It Works For Me (TM), but without OpenGL code in the view. Try passing the following windows to your WindowToDIB() function:
CMainFrame* mainfrm = static_cast<CMainFrame*>(::AfxGetMainWnd());
- mainfrm
- mainfrm->MDIGetActive()
- mainfrm->MDIGetActive()->GetActiveView()
and see what you get.
The contents of each window are DirectX surfaces and are only assembled by the window manager in the graphics card. You won't be able to capture this unless you switch off the new interface (DWM) or code specifically for screen capture from the DWM.
Wikipedia has a good description of the Desktop Window Manager (DWM)
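If what you ultimately need is just the GL view's contents (rather than the composed desktop), one workaround that sidesteps the DWM entirely is to read the pixels back from your own OpenGL context with glReadPixels and build the DIB/JPEG from that buffer. A rough sketch, assuming the view's GL context is current and the client area is w x h pixels:

// Rough sketch: read the current OpenGL framebuffer into a BGR buffer that
// can be fed to a bottom-up DIB.
#include <windows.h>
#include <GL/gl.h>
#include <vector>

std::vector<BYTE> CaptureGLClientArea(int w, int h)
{
    const size_t stride = (static_cast<size_t>(w) * 3 + 3) & ~size_t(3); // DWORD-aligned rows, as a DIB expects
    std::vector<BYTE> pixels(stride * h);
    glPixelStorei(GL_PACK_ALIGNMENT, 4);   // pack rows on 4-byte boundaries to match the DIB stride
    glReadBuffer(GL_BACK);                 // or GL_FRONT, depending on when you capture
    glReadPixels(0, 0, w, h, GL_BGR_EXT, GL_UNSIGNED_BYTE, &pixels[0]);
    // Rows come back bottom-up, which matches a bottom-up (positive-height) DIB.
    return pixels;
}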
Sorry, I still don't understand. Are you trying to get the Print Screen key to work on all four applications? Or are you trying to get the WindowToDIB() function to work, which takes a 'screenshot' (from within your own application) of the application itself, so that it can be saved as an image file?
Also, what do you mean by 'the Print Screen key works as expected, but it does not save the screen as a JPEG'? Print Screen only copies to the clipboard; what happens when you paste into Paint?
If your WindowToDIB() function only 'captures' the window you pass to it, then yes, your MDI child windows are not going to show up.
We eventually solved this by creating a different OpenGL context, and drawing everything to that. We gave up on the screen capture.