Does anyone know why the QRectF right()/bottom() methods behave differently from QRect's?
The following example
QRectF r1(0, 0, 20, 20);
qDebug() << r1.right();
QRect r2(0, 0, 20, 20);
qDebug() << r2.right();
returns this result:
20
19
When I measure width(), however, both return 20:
QRectF r1(0, 0, 20, 20);
qDebug() << r1.width();
QRect r2(0, 0, 20, 20);
qDebug() << r2.width();
20
20
I checked the documentation but didn't find any mention of this difference.
What is the reason for this? And how should I use QRectF with a QGraphicsItem when drawing a 1-pixel-wide line without antialiasing? Should I always call boundingRect().adjust(0, 0, -1, -1)?
A bit late to the party, but today I stumbled across the same issue. A look into the Qt docs revealed that all accessors for bottom and right carry some "historical burden" (e.g. QRect::bottom()):
Note that for historical reasons this function returns top() + height() - 1; use y() + height() to retrieve the true y-coordinate.
See also Qt-Docs
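A minimal sketch for comparison (expected output in the comments; x() + width() is what the docs recommend for the true edge):
#include <QRect>
#include <QRectF>
#include <QDebug>

int main()
{
    QRect  ri(0, 0, 20, 20);
    QRectF rf(0, 0, 20, 20);

    qDebug() << ri.right();           // 19: historical left() + width() - 1
    qDebug() << ri.x() + ri.width();  // 20: the "true" right edge per the docs
    qDebug() << rf.right();           // 20: QRectF carries no such offset

    return 0;
}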
I have a rather simple question, but after searching for quite some time I have found no real answer yet. Microsoft suggests enabling the AVX enhanced instruction set in order to also make use of SSE4-optimized code.
Unfortunately, from what I have read, this also requires an AVX-capable CPU. Is there a known way to enable SSE4 without requiring AVX in VC2013?
The background of this question is obvious, I think: SSE4 has been supported far longer and only requires older CPUs (the first from 2006, I think), while AVX requires CPUs from 2011. The DLL in question only uses SSE4 optimizations, but for now I have to stick with SSE2, sacrificing performance, in order to keep it working.
It seems that the /arch:SSE2 flag enables SSE2 and later intrinsics. I don't have Visual Studio installed, but this example works (_mm_floor_ps is SSE4-specific):
#include <smmintrin.h>
#include <iostream>
using namespace std;

int main()
{
    // 16-byte alignment is required for _mm_load_ps/_mm_store_ps.
    __declspec(align(16)) float values[4] = {1.3f, 2.1f, 4.3f, 5.1f};

    for (int i = 0; i < 4; i++)
        cout << values[i] << ' ';
    cout << endl;

    __m128 x = _mm_load_ps(values);
    x = _mm_floor_ps(x);          // SSE4.1 intrinsic
    _mm_store_ps(values, x);

    for (int i = 0; i < 4; i++)
        cout << values[i] << ' ';
    cout << endl;
}
You can try it online here.
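One caveat: /arch: only affects code generation, so a binary using SSE4 intrinsics will still crash on a CPU without SSE4.1. A minimal runtime guard using MSVC's __cpuid intrinsic (SSE4.1 is reported in CPUID leaf 1, ECX bit 19):
#include <intrin.h>
#include <iostream>

// Returns true if the CPU reports SSE4.1 support
// (CPUID leaf 1, ECX bit 19).
bool HasSSE41()
{
    int info[4];
    __cpuid(info, 1);
    return ((info[2] >> 19) & 1) != 0;
}

int main()
{
    std::cout << (HasSSE41() ? "SSE4.1 supported" : "SSE4.1 not supported")
              << std::endl;
}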
I am parallelizing a simple for loop with OpenMP in Visual Studio 2012.
The implementation file is a .cu file compiled by nvcc.
With OpenMP the for loop gets faster, but the other parts of the function get slower.
I couldn't find the answer even after reading through many related questions.
Code follows.
void Test()
{
    unsigned char* pbDest = (unsigned char*)malloc(1000000);
    unsigned char* pbSrc = (unsigned char*)malloc(3000000);

#pragma omp parallel for shared(pbDest, pbSrc)
    for (int i = 0; i < 1000000; i++)
    {
        pbDest[i] = (unsigned char)((299 * pbSrc[3 * i] + 587 * pbSrc[3 * i + 1] + 114 * pbSrc[3 * i + 2]) / 1000);
    }

    ... // other part

    free(pbDest);
    free(pbSrc);
}
This Test function executes in 100 ms without OpenMP, but in 120 ms with it.
So I suspected the OpenMP for loop itself, but that part is actually optimized correctly when OpenMP is used, going from 50 ms to 20 ms.
What is the problem?
I would appreciate any help.
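For anyone who wants to reproduce the timings, a self-contained sketch of the loop above, timed with OpenMP's own omp_get_wtime() so the loop and the other parts can be measured separately:
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main()
{
    unsigned char* pbDest = (unsigned char*)malloc(1000000);
    unsigned char* pbSrc = (unsigned char*)malloc(3000000);
    memset(pbSrc, 128, 3000000);   // give the source defined contents

    double t0 = omp_get_wtime();
#pragma omp parallel for shared(pbDest, pbSrc)
    for (int i = 0; i < 1000000; i++)
    {
        pbDest[i] = (unsigned char)((299 * pbSrc[3 * i] + 587 * pbSrc[3 * i + 1] + 114 * pbSrc[3 * i + 2]) / 1000);
    }
    double t1 = omp_get_wtime();

    // ...time the "other part" here in the same way...

    printf("loop: %.3f ms\n", (t1 - t0) * 1000.0);

    free(pbDest);
    free(pbSrc);
    return 0;
}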
This is my first post; I hope I am not making any mistakes.
I have the following code. I am trying to allocate and access a two-dimensional array in one shot, and more importantly in one contiguous byte array. I also need to be able to access each sub-array individually, as shown in the code. It works fine in debug mode, but in a release build in VS 2012 it causes problems at runtime when compiler optimizations are applied. If I disable the release compiler optimizations, it works. Do I need to do some kind of special cast to inform the compiler?
My priorities in the code are fast allocation and network transfer of the complete array, while at the same time being able to work with its sub-arrays.
I prefer not to use boost.
Thanks a lot :)
void PrintBytes(char* x, byte* data, int length)
{
    using namespace std;
    cout << x << endl;
    for (int i = 0; i < length; i++)
    {
        cout << "0x" << setbase(16) << setw(2) << setfill('0');
        cout << static_cast<unsigned int>(data[i]) << " ";
    }
    cout << dec;
    cout << endl;
}
byte* set = new byte[SET_SIZE*input_size];
for (int i = 0; i < SET_SIZE; i++)
{
    sprintf((char*)&set[i*input_size], "M%06d", i + 1);
    PrintBytes("set", (byte*)&set[i*input_size], input_size);
}
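For reference, a minimal self-contained version of the pattern I mean (SET_SIZE and input_size are made-up values here; snprintf keeps each write inside its slot):
#include <cstdio>

typedef unsigned char byte;

const int SET_SIZE = 4;      // hypothetical value for illustration
const int input_size = 8;    // "M%06d" needs 7 chars + terminating NUL

// Row i of the flat array starts at set + i * input_size.
static byte* Row(byte* set, int i) { return set + i * input_size; }

int main()
{
    byte* set = new byte[SET_SIZE * input_size];

    for (int i = 0; i < SET_SIZE; i++)
        snprintf((char*)Row(set, i), input_size, "M%06d", i + 1);

    for (int i = 0; i < SET_SIZE; i++)
        printf("%s\n", (char*)Row(set, i));

    delete[] set;
    return 0;
}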
I have multiple "desktops" that I switch between for different tasks in my KDE Linux environment. How can I automagically determine which desktop my Konsole (KDE console) window is being displayed in?
EDIT:
I'm using KDE 3.4 in a corporate environment
This is programming related. I need to programmatically (a.k.a. automagically) determine which desktop a user is on and then interact with X windows on that desktop from a Python script.
Should I go around and nuke all Microsoft IDE questions as not programming related? How about Win32 "programming" questions? Should I try to close those too?
Actually, EWMH _NET_CURRENT_DESKTOP gives you the current desktop for X, not the desktop of a given application. Here's a C snippet to get the _NET_WM_DESKTOP of an application. If run from the KDE Konsole in question, it will give you the desktop it is on, even if it is not the active desktop or not in focus.
#include <X11/Xlib.h>
#include <X11/Xatom.h>
...
Atom net_wm_desktop = 0;
Atom type_ret;
int fmt_ret;
unsigned long nitems_ret, bytes_after_ret;
unsigned char *data = NULL;
long desktop;

/* see if we've got a desktop atom */
net_wm_desktop = XInternAtom( display, "_NET_WM_DESKTOP", False );
if ( net_wm_desktop == None ) {
    return;
}

/* find out what desktop we're currently on */
if ( XGetWindowProperty( display, window, net_wm_desktop, 0, 1,
        False, XA_CARDINAL, &type_ret, &fmt_ret,
        &nitems_ret, &bytes_after_ret,
        &data ) != Success || data == NULL
   ) {
    fprintf( stderr, "XGetWindowProperty() failed" );
    if ( data == NULL ) {
        fprintf( stderr, "No data returned from XGetWindowProperty()" );
    }
    return;
}
desktop = *(long *)data;   /* format-32 CARDINAL data comes back as longs */
XFree( data );
and desktop should be the index of the virtual desktop the Konsole is currently on. That is not the same as which head of a multi-headed display it is on. If you want to determine the head, you need XineramaQueryScreens (the Xinerama extension; I am not sure whether there is an XRandR equivalent). It does not work for nVidia's TwinView.
Here's an excerpt from some code I wrote that, given an x and y, calculates the screen boundaries (sx, sy, plus sw for the screen width and sh for the screen height). You can easily adapt it to simply return which "screen" or head x and y are on. (Screen has a special meaning in X11.)
#include <X11/X.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xinerama.h>
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>

Bool xy2bounds(Display* d, int x, int y, int* sx, int* sy, int* sw, int* sh)
{
    *sx = *sy = *sw = *sh = -1; /* Set to invalid, for error condition */

    XineramaScreenInfo *XinInfo;
    int xin_screens = -1;
    int i;
    int x_origin, y_origin, width, height;
    Bool found = False;

    if ( d == NULL )
        return False;
    if ( (x < 0) || (y < 0) )
        return False;

    if ( True == XineramaIsActive(d) ) {
        XinInfo = XineramaQueryScreens( d, &xin_screens );
        if ( (NULL == XinInfo) || (0 == xin_screens) ) {
            return False;
        }
    } else {
        /* Xinerama is not active, so return the usual width/height values */
        *sx = 0;
        *sy = 0;
        *sw = DisplayWidth( d, XDefaultScreen(d) );
        *sh = DisplayHeight( d, XDefaultScreen(d) );
        return True;
    }

    for ( i = 0; i < xin_screens; i++ ) {
        x_origin = XinInfo[i].x_org;
        y_origin = XinInfo[i].y_org;
        width = XinInfo[i].width;
        height = XinInfo[i].height;

        printf("Screens: (%d) %dx%d - %dx%d\n", i,
               x_origin, y_origin, width, height );

        if ( (x >= x_origin) && (y >= y_origin) ) {
            if ( (x <= x_origin+width) && (y <= y_origin+height) ) {
                printf("Found Screen[%d] %dx%d - %dx%d\n",
                       i, x_origin, y_origin, width, height );
                *sx = x_origin;
                *sy = y_origin;
                *sw = width;
                *sh = height;
                found = True;
                break;
            }
        }
    }

    assert( found == True );
    return found;
}
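A hypothetical caller, just to show the intended use (coordinates made up; xy2bounds is the function above, so the includes carry over):
/* continues the excerpt above */
int main(void)
{
    Display *d = XOpenDisplay(NULL);
    int sx, sy, sw, sh;

    if (d != NULL && xy2bounds(d, 100, 200, &sx, &sy, &sw, &sh) == True)
        printf("head containing (100,200): %d,%d %dx%d\n", sx, sy, sw, sh);

    if (d != NULL)
        XCloseDisplay(d);
    return 0;
}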
Referring to the accepted answer: dcop is now out of date; instead of dcop you might want to use D-Bus (qdbus is a command-line tool for D-Bus).
qdbus org.kde.kwin /KWin currentDesktop
The KDE window manager, as well as GNOME and all WMs that follow the freedesktop standards, supports the Extended Window Manager Hints (EWMH).
These hints allow developers to programmatically access several window manager functions, like maximize, minimize, setting the window title, virtual desktops, etc.
I have never worked with KDE, but GNOME provides such functionality, so I assume KDE has it too.
It is also possible to access a subset of these hints with pure Xlib functions. That subset is the ICCCM hints. If memory serves me correctly, virtual desktop access is only in EWMH.
Update: Found it! (_NET_CURRENT_DESKTOP)
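For reference, a minimal Xlib sketch that reads _NET_CURRENT_DESKTOP from the root window, assuming an EWMH-compliant window manager (the same XGetWindowProperty pattern as the snippet earlier in this thread):
#include <X11/Xlib.h>
#include <X11/Xatom.h>
#include <stdio.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (dpy == NULL)
        return 1;

    Atom prop = XInternAtom(dpy, "_NET_CURRENT_DESKTOP", False);
    Atom type;
    int format;
    unsigned long nitems, bytes_after;
    unsigned char *data = NULL;

    if (XGetWindowProperty(dpy, DefaultRootWindow(dpy), prop, 0, 1,
                           False, XA_CARDINAL, &type, &format,
                           &nitems, &bytes_after, &data) == Success
            && data != NULL) {
        printf("current desktop: %ld\n", *(long *)data);
        XFree(data);
    }

    XCloseDisplay(dpy);
    return 0;
}
Compile with cc current_desktop.c -lX11.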
With dcop, the KDE Desktop COmmunication Protocol, you can easily get the current desktop by executing the
dcop kwin KWinInterface currentDesktop
command. If you are working with the newer KDE 4.x, dcop is no longer used, and you can translate the command into a D-Bus call. It should be quite simple to send and receive D-Bus messages with the Python APIs.
Sorry for my bad English,
Emilio
A new answer, because most of the answers here get the current desktop, not the one the terminal is in (so they will break if the user changes desktops while the script is running).
xprop -id $WINDOWID | sed -rn -e 's/_NET_WM_DESKTOP\(CARDINAL\) = ([^)]+)/\1/pg'
I tested this in a loop while changing desktops, and it works OK (test script below; you have to check the output manually after the run).
while true
do
    xprop -id $WINDOWID | sed -rn -e 's/_NET_WM_DESKTOP\(CARDINAL\) = ([^)]+)/\1/pg'
    sleep 1
done
Thanks for the other answers and comments, for getting me half way there.
I was looking for the same thing, but with one more restriction: I don't want to run a shell command to achieve the result. Starting from Kimball Robinson's answer, this is what I got.
Tested working in Python3.7, Debian 10.3, KDE Plasma 5.14.5.
import dbus

# Connect to the session bus and query KWin for the current virtual desktop.
bus = dbus.SessionBus()
proxy = bus.get_object("org.kde.KWin", "/KWin")
current_desktop = int(proxy.currentDesktop(dbus_interface="org.kde.KWin"))