bottom-right position of QRect vs QRectF - qgraphicsview

Does anyone know why QRectF's right()/bottom() methods behave differently from QRect's?
The following example
QRectF r1(0, 0, 20, 20);
qDebug() << r1.right();
QRect r2(0, 0, 20, 20);
qDebug() << r2.right();
returns this result:
20
19
When I tried to measure width(), both return 20:
QRectF r1(0, 0, 20, 20);
qDebug() << r1.width();
QRect r2(0, 0, 20, 20);
qDebug() << r2.width();
20
20
I checked the documentation but didn't find any mention of this difference.
What is the reason for it? And how should I use QRectF with a QGraphicsItem when drawing a 1-pixel-wide line without antialiasing? Should I always call boundingRect().adjust(0, 0, -1, -1)?

A bit late to the party, but today I stumbled across the same issue. A look into the Qt docs reveals that all the accessors for bottom and right carry some "historical burden" (e.g. QRect::bottom()):
Note that for historical reasons this function returns top() + height() - 1; use y() + height() to retrieve the true y-coordinate.
See also Qt-Docs
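A quick sketch (not from the original answer) that shows the difference and the documented workaround of using x() + width() instead of right():
#include <QRect>
#include <QRectF>
#include <QDebug>

int main()
{
    QRect  r(0, 0, 20, 20);
    QRectF rf(0, 0, 20, 20);

    qDebug() << r.right();          // 19: historically left() + width() - 1
    qDebug() << r.x() + r.width();  // 20: the "true" right edge
    qDebug() << rf.right();         // 20: QRectF has no such offset
    return 0;
}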

What may be wrong about my use of SetGraphicsRootDescriptorTable in D3D12?

For the 7 meshes that I would like to draw, I load 7 textures and create the corresponding SRVs in a descriptor heap. Then there's another SRV for IMGUI. There are also 3 CBVs, for triple buffering. So the heap layout should be: | srv x7 | srv x1 | cbv x3 |.
The problem is that when I call SetGraphicsRootDescriptorTable for root parameter 0, which should be an SRV (the texture, actually), something goes wrong. Here's the code:
ID3D12DescriptorHeap* ppHeaps[] = { pCbvSrvDescriptorHeap, pSamplerDescriptorHeap };
pCommandList->SetDescriptorHeaps(_countof(ppHeaps), ppHeaps);
pCommandList->IASetPrimitiveTopology(D3D_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
pCommandList->IASetIndexBuffer(pIndexBufferViewDesc);
pCommandList->IASetVertexBuffers(0, 1, pVertexBufferViewDesc);
CD3DX12_GPU_DESCRIPTOR_HANDLE srvHandle(pCbvSrvDescriptorHeap->GetGPUDescriptorHandleForHeapStart(), indexMesh, cbvSrvDescriptorSize);
pCommandList->SetGraphicsRootDescriptorTable(0, srvHandle);
pCommandList->SetGraphicsRootDescriptorTable(1, pSamplerDescriptorHeap->GetGPUDescriptorHandleForHeapStart());
If indexMesh is 5, SetGraphicsRootDescriptorTable causes the following error, though the render output still looks fine. When indexMesh is 6, the same error still occurs, plus another identical error except that the offset 8 becomes 9.
D3D12 ERROR: CGraphicsCommandList::SetGraphicsRootDescriptorTable: Specified GPU Descriptor Handle (ptr = 0x400750000002c0 at 8 offsetInDescriptorsFromDescriptorHeapStart) of type CBV, for Root Signature (0x0000020A516E8BF0:'m_rootSignature')'s Descriptor Table (at Parameter Index [0])'s Descriptor Range (at Range Index [0] of type D3D12_DESCRIPTOR_RANGE_TYPE_SRV) have mismatching types. All descriptors of descriptor ranges declared STATIC (not-DESCRIPTORS_VOLATILE) in a root signature must be initialized prior to being set on the command list. [ EXECUTION ERROR #646: INVALID_DESCRIPTOR_HANDLE]
That is really weird, because the only thing I can think of that might cause this is cbvSrvDescriptorSize being wrong. It is 64, and it is set by m_device->GetDescriptorHandleIncrementSize(D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV); which I think should work. Besides, if I set it to another value such as 32, the application crashes.
So if cbvSrvDescriptorSize is right, why would a correct indexMesh produce the wrong descriptor handle offset? The consequence of this error seems to be that my CBV gets affected, which breaks the render output. Any suggestion would be appreciated, thanks!
Thanks to Chuck for the suggestion; here's the code for the root signature:
CD3DX12_DESCRIPTOR_RANGE1 ranges[3];
ranges[0].Init(D3D12_DESCRIPTOR_RANGE_TYPE_SRV, 4, 0, 0, D3D12_DESCRIPTOR_RANGE_FLAG_DATA_STATIC);
ranges[1].Init(D3D12_DESCRIPTOR_RANGE_TYPE_SAMPLER, 1, 0);
ranges[2].Init(D3D12_DESCRIPTOR_RANGE_TYPE_CBV, 1, 0, 0, D3D12_DESCRIPTOR_RANGE_FLAG_DATA_STATIC);
CD3DX12_ROOT_PARAMETER1 rootParameters[3];
rootParameters[0].InitAsDescriptorTable(1, &ranges[0], D3D12_SHADER_VISIBILITY_PIXEL);
rootParameters[1].InitAsDescriptorTable(1, &ranges[1], D3D12_SHADER_VISIBILITY_PIXEL);
rootParameters[2].InitAsDescriptorTable(1, &ranges[2], D3D12_SHADER_VISIBILITY_ALL);
CD3DX12_VERSIONED_ROOT_SIGNATURE_DESC rootSignatureDesc;
rootSignatureDesc.Init_1_1(_countof(rootParameters), rootParameters, 0, nullptr, D3D12_ROOT_SIGNATURE_FLAG_ALLOW_INPUT_ASSEMBLER_INPUT_LAYOUT);
ComPtr<ID3DBlob> signature;
ComPtr<ID3DBlob> error;
ThrowIfFailed(D3DX12SerializeVersionedRootSignature(&rootSignatureDesc, featureData.HighestVersion, &signature, &error));
ThrowIfFailed(m_device->CreateRootSignature(0, signature->GetBufferPointer(), signature->GetBufferSize(), IID_PPV_ARGS(&m_rootSignature)));
NAME_D3D12_OBJECT(m_rootSignature);
And here are some declarations in the pixel shader:
Texture2DArray g_textures : register(t0);
SamplerState g_sampler : register(s0);
cbuffer cb0 : register(b0)
{
    float4x4 g_mWorldViewProj;
    float3 g_lightPos;
    float3 g_eyePos;
    ...
};
It's not very often I come across the exact problem I'm experiencing (my code is almost verbatim) and it's an in-progress post! Let's suffer together.
My problem turned out to be the calls to CreateConstantBufferView()/CreateShaderResourceView() - I was passing srvHeap->GetCPUDescriptorHandleForHeapStart() as the destDescriptor handle. These need to be offset to match your table layout (the offsetInDescriptorsFromTableStart param of CD3DX12_DESCRIPTOR_RANGE1).
I found it easier to just maintain one D3D12_CPU_DESCRIPTOR_HANDLE to the heap and just increment handle.ptr after every call to CreateSomethingView() which uses that heap.
CD3DX12_DESCRIPTOR_RANGE1 rangesV[1] = {{}};
CD3DX12_DESCRIPTOR_RANGE1 rangesP[1] = {{}};
// Vertex
rangesV[0].Init(D3D12_DESCRIPTOR_RANGE_TYPE_CBV, 1, 0, 0, D3D12_DESCRIPTOR_RANGE_FLAG_NONE, 0); // b0 at desc offset 0
// Pixel
rangesP[0].Init(D3D12_DESCRIPTOR_RANGE_TYPE_SRV, 1, 0, 0, D3D12_DESCRIPTOR_RANGE_FLAG_NONE, 1); // t0 at desc offset 1
CD3DX12_ROOT_PARAMETER1 rootParameters[2] = {{}};
rootParameters[0].InitAsDescriptorTable(1, &rangesV[0], D3D12_SHADER_VISIBILITY_VERTEX);
rootParameters[1].InitAsDescriptorTable(1, &rangesP[0], D3D12_SHADER_VISIBILITY_PIXEL);
D3D12_CPU_DESCRIPTOR_HANDLE srvHeapHandle = srvHeap->GetCPUDescriptorHandleForHeapStart();
// ----
device->CreateConstantBufferView(&cbvDesc, srvHeapHandle);
srvHeapHandle.ptr += device->GetDescriptorHandleIncrementSize(D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV);
// ----
device->CreateShaderResourceView(texture, &srvDesc, srvHeapHandle);
srvHeapHandle.ptr += device->GetDescriptorHandleIncrementSize(D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV);
Perhaps an enum would help keep it tidier and more maintainable, though. I'm still experimenting.
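For what it's worth, here is a rough sketch of that enum idea (the slot names and the helper are invented for illustration and assume d3dx12.h is available):
enum HeapSlot : UINT
{
    kSlotSceneCBV = 0,   // b0
    kSlotDiffuseSRV,     // t0
    kSlotCount
};

// Build a CPU handle for a named slot instead of incrementing a raw pointer.
inline CD3DX12_CPU_DESCRIPTOR_HANDLE CpuSlot(ID3D12DescriptorHeap *heap,
                                             ID3D12Device *device, HeapSlot slot)
{
    return CD3DX12_CPU_DESCRIPTOR_HANDLE(
        heap->GetCPUDescriptorHandleForHeapStart(), static_cast<INT>(slot),
        device->GetDescriptorHandleIncrementSize(D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV));
}

// Usage, mirroring the snippet above:
// device->CreateConstantBufferView(&cbvDesc, CpuSlot(srvHeap, device, kSlotSceneCBV));
// device->CreateShaderResourceView(texture, &srvDesc, CpuSlot(srvHeap, device, kSlotDiffuseSRV));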

fltk setting button's active label color

I am using fltk 1.3.2.
I set the button's label color with
_button->labelcolor(fl_rgb_color(162, 60, 62));
but when I press the button, the color changes.
I couldn't find a function to set the active label color.
Does anyone know how to do that?
Edit:
I call the Fl::background() and Fl::foreground() functions before creating the window. This causes the problem.
Edit2:
This example shows the problem.
#include <FL/Fl.H>
#include <FL/Fl_Window.H>
#include <FL/Fl_Button.H>
#include <iostream>
void HitMe(Fl_Widget* w)
{
    std::cout << "Ouch" << std::endl;
}

int main(int argc, char ** argv)
{
    Fl::background(0x60, 0x66, 0x60);
    Fl_Window *window = new Fl_Window(320, 130);
    Fl_Button *b = new Fl_Button(10, 10, 130, 30, "A red label");
    b->labelcolor(fl_rgb_color(162, 60, 20));
    b->callback(HitMe);
    window->end();
    window->show(argc, argv);
    return Fl::run();
}
When I comment out the Fl::background() call, everything is fine.
What you are seeing is the contrasting colour (see the commented-out code below). FLTK does this when the button is pressed: it gets the colour of the button and works out the contrasting colour based on the foreground and background colours. Have a look at the help for fl_contrast for more details.
Basically, if there is enough contrast it will use the foreground colour; otherwise it will pick a contrasting colour for your background.
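As a rough illustration (not part of the original answer), you can call fl_contrast() yourself to see whether FLTK would keep your label colour against a given box colour; the values below mirror the example above:
#include <FL/Fl.H>
#include <iostream>

int main()
{
    Fl_Color label   = fl_rgb_color(162, 60, 20);        // the red label
    Fl_Color pressed = fl_rgb_color(0x60, 0x66, 0x60);   // roughly the pressed box colour
    Fl_Color used    = fl_contrast(label, pressed);      // what draw() would actually use
    std::cout << (used == label ? "label colour kept"
                                : "contrasting colour substituted") << std::endl;
    return 0;
}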
What can you do about it?
do nothing - that is how it is
choose a lighter background colour which will satisfy the contrast conditions
make your own button type with its own draw method
class KeepFGButton : public Fl_Button
{
public:
    KeepFGButton(int x, int y, int w, int h, const char* s)
        : Fl_Button(x, y, w, h, s)
    {
    }
    void draw() {
        if (type() == FL_HIDDEN_BUTTON) return;
        Fl_Color col = value() ? selection_color() : color();
        draw_box(value() ? (down_box() ? down_box() : fl_down(box())) : box(), col);
        draw_backdrop();
        // Remove the code that changes the contrast
        //if (labeltype() == FL_NORMAL_LABEL && value()) {
        //    Fl_Color c = labelcolor();
        //    labelcolor(fl_contrast(c, col));
        //    draw_label();
        //    labelcolor(c);
        //}
        //else
        draw_label();
        if (Fl::focus() == this) draw_focus();
    }
};
int main(int argc, char ** argv)
{
    Fl::background(0x60, 0x66, 0x60);
    Fl_Window *window = new Fl_Window(320, 130);
    Fl_Button *b = new KeepFGButton(10, 10, 130, 30, "A red label");
    ...
Try the following and let me know if, when you run it, the label becomes white. If it doesn't then there is possibly something else that you are doing that is not quite right. If it does, I don't know what the problem is.
#include <FL/Fl.H>
#include <FL/Fl_Window.H>
#include <FL/Fl_Button.H>
#include <iostream>
void HitMe(Fl_Widget* w)
{
    std::cout << "Ouch" << std::endl;
}

int main(int argc, char ** argv)
{
    Fl_Window *window = new Fl_Window(320, 130);
    Fl_Button *b = new Fl_Button(10, 10, 130, 30, "A red label");
    b->labelcolor(fl_rgb_color(162, 60, 20));
    b->callback(HitMe);
    window->end();
    window->show(argc, argv);
    return Fl::run();
}
This is definitely too late to help the author, but in the event that this helps anyone else searching for an answer to this still relevant issue in FLTK, I'm going to offer my solution:
If you dig through the source code for FLTK 1.3.4-2 (the current stable release as of this post), there are a couple of built-in colormap indices which are referenced when shadow boxes or frames are drawn: FL_DARK3 (for the boxes) and FL_DARK2 (for the scrollbar back color in Fl_Scroll). You can look at this FLTK documentation to see how to reset these colors to anything you wish at runtime. In particular, anticipating that this ugly red default mess will show up whenever there's a sufficiently dark background, it works well for me to just set these to a slightly lighter version of the overlaid box color:
Fl_Color boxColor = 0x88888800;
Fl_Color boxShadowColor = 0xaaaaaa00;
Fl::set_color(FL_DARK3, boxShadowColor);
Fl::set_color(FL_DARK2, boxShadowColor);
Now create your label as above and the display will be free of the red shadow. N.b. it is also possible to override the standard background2() behavior which resets FL_BACKGROUND_COLOR to the one produced by fl_contrast:
Fl::set_color(FL_BACKGROUND2_COLOR, yourBgColor);
The same trick works for other hard to reset colors like FL_INACTIVE_COLOR and FL_SELECTION_COLOR.
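For example (the colour values below are arbitrary placeholders, not from the original answer):
Fl::set_color(FL_INACTIVE_COLOR,  0x80808000);
Fl::set_color(FL_SELECTION_COLOR, 0x3a5fcd00);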
Hope this workaround helps.

Making a metric between colors (perception model) with "difference"

Take a look at the image below. If you're not color blind, you should see some A's and B's. There are 3 A's and 3 B's in the image, and they all have one thing in common: Their color is the background + 10% of value, saturation and hue, in that order. For most people, the center letters are very hard to see - saturation doesn't do much, it seems!
This is a bit troublesome though because I'm making some character recognition software, and I'm filtering the image based on known foreground and background colors. But sometimes these are quite close, while the image is noisy. In order to decide whether a pixel belongs to a letter or to the background, my program checks Euclidean RGB distance:
(r-fr)*(r-fr) + (g-fg)*(g-fg) + (b-fb)*(b-fb) <
(r-br)*(r-br) + (g-bg)*(g-bg) + (b-bb)*(b-bb)
This works okay, but when the background and foreground are close it sometimes works pretty badly.
Are there better metrics to look at? I've looked into color perception models, but those mostly model brightness rather than the perceptual difference I'm looking for. Maybe one that treats saturation as less significant, and certain hue differences too? Any pointers to interesting metrics would be very useful.
As was mentioned in the comments, the answer is to use a perceptual color space, but I thought I'd throw together a visual example of how edge detection behaves in the two color spaces. (Code is at the end.) In both cases, Sobel edge detection is performed on the 3-channel color image, and then the result is flattened to grayscale.
RGB space:
L*a*b space (image is logarithmic, as the edges on the third letters are much more significant than the edges on the first letters, which are more significant than the edges on the second letters):
OpenCV C++ code:
#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "iostream"
using namespace cv;
using namespace std;
void show(const char *name, Mat &img, int dolog=0)
{
double minVal, maxVal;
minMaxLoc(img, &minVal, &maxVal);
cout << name << " " << "minVal : " << minVal << endl << "maxVal : " << maxVal << endl;
Mat draw;
if(dolog) {
Mat shifted, tmp;
add(img, minVal, shifted);
log(shifted, tmp);
minMaxLoc(tmp, &minVal, &maxVal);
tmp.convertTo(draw, CV_8U, 255.0/(maxVal - minVal), -minVal * 255.0/(maxVal - minVal));
} else {
img.convertTo(draw, CV_8U, 255.0/(maxVal - minVal), -minVal * 255.0/(maxVal - minVal));
}
namedWindow(name, CV_WINDOW_AUTOSIZE);
imshow(name, draw);
imwrite(name, draw);
}
int main( )
{
Mat src;
src = imread("AAABBB.png", CV_LOAD_IMAGE_COLOR);
namedWindow( "Original image", CV_WINDOW_AUTOSIZE );
imshow( "Original image", src );
Mat lab, gray;
cvtColor(src, lab, CV_BGR2Lab);
Mat sobel_lab, sobel_bgr;
Sobel(lab, sobel_lab, CV_32F, 1, 0);
Sobel(src, sobel_bgr, CV_32F, 1, 0);
Mat bgr_sobel_lab, gray_sobel_lab;
cvtColor(sobel_lab, bgr_sobel_lab, CV_Lab2BGR);
show("lab->bgr edges.png", bgr_sobel_lab, 1);
cvtColor(bgr_sobel_lab, gray_sobel_lab, CV_BGR2GRAY);
Mat gray_sobel_bgr;
cvtColor(sobel_bgr, gray_sobel_bgr, CV_BGR2GRAY);
show("lab edges.png", gray_sobel_lab, 1);
show("bgr edges.png", gray_sobel_bgr);
waitKey(0);
return 0;
}
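As a rough sketch (not from the original answer) of how the asker's nearest-colour test could be moved into L*a*b*; the helper names are invented and the same OpenCV headers as above are assumed:
#include "opencv2/core/core.hpp"
#include "opencv2/imgproc/imgproc.hpp"
using namespace cv;

// Convert a single BGR pixel to 8-bit L*a*b* (all three channels scaled to 0..255).
static Vec3f toLab(const Vec3b &bgr)
{
    Mat px(1, 1, CV_8UC3, Scalar(bgr[0], bgr[1], bgr[2])), lab;
    cvtColor(px, lab, CV_BGR2Lab);
    Vec3b v = lab.at<Vec3b>(0, 0);
    return Vec3f(v[0], v[1], v[2]);
}

// True if the pixel is closer to the known foreground than to the known background,
// using squared Euclidean distance in L*a*b* (roughly a CIE76 delta-E comparison).
bool isForeground(const Vec3b &pixel, const Vec3b &fg, const Vec3b &bg)
{
    Vec3f p = toLab(pixel), f = toLab(fg), b = toLab(bg);
    return norm(p - f, NORM_L2SQR) < norm(p - b, NORM_L2SQR);
}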

j2me: using custom fonts(bitmap) performance

I want to use custom fonts in my J2ME application, so I created a PNG file containing all the needed glyphs, plus an array of glyph widths and another of glyph offsets within the PNG.
Now I want to render text in my app using the above font inside a GameCanvas subclass, but with the following code, rendering text on a real device is very slow.
Note: the text is encoded (for some purposes) as bytes and stored in the this.text variable: 242 = [space], 241 = [\n] and 243 = [\r].
// this.text holds the encoded bytes; font is the glyph strip image,
// widths[] the glyph widths and starts[] the glyph offsets within the strip.
for (int textIndex = 0; textIndex < this.text.length; textIndex++)
{
    int index = this.text[textIndex] & 0xFF;
    if (index > 243)
    {
        continue; // unknown code, skip it
    }
    else if (index == 242) // space
    {
        lineLeft += 3;
    }
    else if (index == 241 || index == 243) // \n or \r: start a new line
    {
        top += font.getHeight();
        lineLeft = 0;
    }
    else // a regular glyph
    {
        int left = starts[index];
        int charWidth = widths[index];
        if (lineLeft + charWidth > getWidth()) // wrap when the line is full
        {
            lineLeft = 0;
            top += font.getHeight();
        }
        bg.drawRegion(font, left, 0, charWidth, font.getHeight(), 0,
                      lineLeft, top, Graphics.LEFT | Graphics.TOP);
        lineLeft += charWidth;
    }
}
Can anyone help me improve the performance and speed of my code?
Finally, sorry for my bad English, and thanks in advance. :)
Edit: I changed line
bg.drawRegion(font, left, 0, charWidth, font.getHeight(), 0, lineLeft, top, Graphics.LEFT|Graphics.TOP);
To:
bg.clipRect(left, top, charWidth, font.getHeight());
bg.drawImage(font, lineLeft - left, top, Graphics.LEFT | Graphics.TOP);
bg.setClip(0, 0, getWidth(), getHeight());
but there was no difference in speed!
Any help, please?
Can anyone help me improve my app?
With this code the text appears after 2-3 seconds on a real device; I want to reduce that to milliseconds. This is very important to me.
Can I use threads? If yes, how?
I can't say for sure why your code performs poorly on a real device.
But how about referring to some well-known open-source J2ME libraries to check their text-drawing implementation, for example LWUIT?
http://java.net/projects/lwuit/sources/svn/content/LWUIT_1_5/UI/src/com/sun/lwuit/CustomFont.java?rev=1628
From the above link you can see its font-drawing implementation; it uses drawImage rather than drawRegion.
I would advise you to look into this library. The implementation is quite good, makes use of industry-standard design patterns (predominantly the Flyweight pattern), and is robust.

Borderless windows on Linux

Is there a standard way to make a particular window borderless on Linux? I believe the window border is drawn by your window manager, so it may be that I just need to use a particular window manager (that would be fine, I'd just need to know which one)... My hope is that all the window managers follow some standard that allows me to do this programmatically...
Using Xlib and old _MOTIF_WM_HINTS:
struct MwmHints {
unsigned long flags;
unsigned long functions;
unsigned long decorations;
long input_mode;
unsigned long status;
};
enum {
MWM_HINTS_FUNCTIONS = (1L << 0),
MWM_HINTS_DECORATIONS = (1L << 1),
MWM_FUNC_ALL = (1L << 0),
MWM_FUNC_RESIZE = (1L << 1),
MWM_FUNC_MOVE = (1L << 2),
MWM_FUNC_MINIMIZE = (1L << 3),
MWM_FUNC_MAXIMIZE = (1L << 4),
MWM_FUNC_CLOSE = (1L << 5)
};
Atom mwmHintsProperty = XInternAtom(display, "_MOTIF_WM_HINTS", 0);
struct MwmHints hints = {0};  // zero-initialize so the unused fields are defined
hints.flags = MWM_HINTS_DECORATIONS;
hints.decorations = 0;
XChangeProperty(display, window, mwmHintsProperty, mwmHintsProperty, 32,
                PropModeReplace, (unsigned char *)&hints, 5);  // the five fields of MwmHints
These days NetWM/EWMH hints are preferred, but as far as I know all modern window managers still support this.
With GTK+ you can call gtk_window_set_decorated().
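A minimal sketch of that call (assuming GTK+ 3; it simply asks the window manager not to decorate the window):
#include <gtk/gtk.h>

int main(int argc, char **argv)
{
    gtk_init(&argc, &argv);
    GtkWidget *window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
    gtk_window_set_decorated(GTK_WINDOW(window), FALSE);  // no title bar or border
    g_signal_connect(window, "destroy", G_CALLBACK(gtk_main_quit), NULL);
    gtk_widget_show_all(window);
    gtk_main();
    return 0;
}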
After a sad farewell to Compiz "window rules" I found
devilspie
A totally crack-ridden program for freaks and weirdos who want precise control over what windows do when they appear. If you want all XChat windows to be on desktop 3, in the lower-left, at 40% transparency, you can do it.
I use it to have a borderless, sticky, task-skipped terminal on my desktop.
There's also a devilspie 2 which uses Lua instead of s-expressions and claims to be better maintained.
https://live.gnome.org/DevilsPie
http://www.burtonini.com/blog/computers/devilspie
