Why are fonts cut off above the "ascender" line in a Qt application running on embedded Linux?

UPDATE
After further tests I verified that the issue described below is related to a problem in the rendering of the font.
So in the end I chose a simpler solution: I edited the font, moving the "ascender" line higher so that all the symbols now sit below it. Now I can see all the symbols correctly!
===========================================================================
I have problems correctly displaying the font Helvetica Neue LT Std on an embedded device. The problem is that I can't see the part of a character that lies in the ascender area of the font; for example, I can't see the accent of the character "À". The circumflex above the character "ê" is also cut off. Checking the font with fontdrop.info, I can confirm that the parts of the characters that are cut off are exactly the parts in the ascender area.
If I run the application on a PC, it works fine.
My embedded device is based on an i.MX 6ULL processor. The screen resolution is 800x480.
Meaning of "ascender area"
I verified that the font being displayed is the correct one. I also tried editing the font, moving the accent above the "À" below the "ascender" line, and in that case the symbol is visible. So I can confirm that the problem is in the rendering of the symbols, or of the parts of a character, that lie in the ascender area.
main.cpp
#include <QGuiApplication>
#include <QQmlApplicationEngine>
#include <QFont>
int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);
    QQmlApplicationEngine engine;

    QFont font("HelveticaNeueLTStd");
    // font.setStretch(QFont::Condensed);
    app.setFont(font);

    engine.load(QUrl(QStringLiteral("qrc:/main.qml")));
    return app.exec();
}
main.qml
import QtQuick 2.6
import QtQuick.Window 2.2
Window {
    visible: true
    width: 800
    height: 480

    // FontLoader { id: fixedFont; name: "HelveticaNeueLTStd" }

    Text {
        id: textEdit
        text: qsTr("212Àê\u222B")
        font.pointSize: 24
        verticalAlignment: Text.AlignVCenter
        anchors.top: parent.top
        anchors.horizontalCenter: parent.horizontalCenter
        anchors.topMargin: 20
        // font.family: fixedFont.name
    }
}
PC vs Embedded example
Maybe it is not a Qt problem but a problem in some font-management configuration on my embedded system? Please help!
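One way to narrow this down is to compare the font's reported ascent with the tight bounding box of one of the clipped glyphs. The sketch below uses only standard Qt calls (QFontMetrics::ascent(), QFontMetrics::tightBoundingRect()); the idea that the embedded renderer clips at the ascent line is an assumption, not something confirmed by Qt:

#include <QGuiApplication>
#include <QFont>
#include <QFontMetrics>
#include <QDebug>

// Diagnostic sketch: if a glyph reaches higher above the baseline than the
// reported ascent, a renderer that clips at the ascent line will cut it off.
int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);

    QFont font("HelveticaNeueLTStd");
    font.setPointSize(24);
    QFontMetrics fm(font);

    const QString sample = QStringLiteral("\u00C0");   // "À", one of the clipped glyphs
    const QRect tight = fm.tightBoundingRect(sample);  // baseline at y == 0, y grows downwards
    qDebug() << "ascent:" << fm.ascent()
             << "glyph top above baseline:" << -tight.top()
             << "overshoot:" << (-tight.top() - fm.ascent());
    return 0;
}

Running this on the PC and on the device would show whether the two font engines report different metrics for the same font.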

Related

GTK Data Capturing

I’m working in C on a project to capture data from a sensor and display it as part of a GUI application on the Raspberry Pi. I am using GTK 3.0, plus Cairo for graphing. I have built an application that works, but I want to make a modification to enable me to change the frequency of data capture.
Within my main code section I have a command like:-
gdk_threads_add_timeout (250, data_capture, widgets);
This all works: the data capture routine is triggered every 250 ms, but I want to add functionality to the GUI to enable the user to change the speed. If I try to call this function from anywhere other than main, it fails.
I have looked for other ways to do it, but I can’t find any examples or explanations of how I can do it.
Ideally what I would like is something like:-
void update_speed(button, widgets)
    // Button to change speed has been pressed
    read speed from GUI
    update frequency
    return

int main()
    ...
    setup GUI
    set default speed
    start main GTK loop
Does anyone have any idea how I could achieve this?
Edit: Additional Code Snippet
(This is not the whole program, but an extract of main)
int main(int argc, char** argv) {

    GtkBuilder *builder;
    GtkWidget *window;
    GError *err = NULL;   // holds any error that occurs within GTK

    // instantiate structure, allocating memory for it
    struct app_widgets *widgets = g_slice_new(struct app_widgets);

    // initialise GTK library and pass it the command line parameters
    gtk_init(&argc, &argv);

    // build the gui
    builder = gtk_builder_new();
    gtk_builder_add_from_file(builder, "../Visual/gui/main_window.glade", &err);
    window = GTK_WIDGET(gtk_builder_get_object(builder, "main_application_window"));

    // build the structure of widget pointers
    widgets->w_spn_dataspeed = GTK_SPIN_BUTTON(gtk_builder_get_object(builder, "spn_dataspeed"));
    widgets->w_spn_refreshspeed = GTK_SPIN_BUTTON(gtk_builder_get_object(builder, "spn_refreshspeed"));
    widgets->w_adj_dataspeed = GTK_ADJUSTMENT(gtk_builder_get_object(builder, "adj_dataspeed"));
    widgets->w_adj_refreshspeed = GTK_ADJUSTMENT(gtk_builder_get_object(builder, "adj_refreshspeed"));

    // connect the widgets to the signal handler
    gtk_builder_connect_signals(builder, widgets);   // note: second parameter points to widgets
    g_object_unref(builder);

    // Set timeouts running to refresh the screen and capture data
    gdk_threads_add_timeout(SCREEN_REFRESH_TIMER, (GSourceFunc)screen_timer_exe, (gpointer)widgets);
    gdk_threads_add_timeout(DATA_REFRESH_TIMER, (GSourceFunc)data_timer_exe, (gpointer)widgets);

    gtk_widget_show(window);
    gtk_main();

    // free up memory used by widget structure; probably not necessary, as the OS will
    // reclaim memory from the application after it exits
    g_slice_free(struct app_widgets, widgets);

    return (EXIT_SUCCESS);
}
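A common way to achieve this is to keep the source ID returned by gdk_threads_add_timeout and, in the button callback, remove that source with g_source_remove before installing a new timeout with the interval read from the GUI. The sketch below assumes an extra field data_timer_id in struct app_widgets and a hypothetical handler name on_btn_dataspeed_clicked wired up in Glade:

// Assumed extra member in struct app_widgets: guint data_timer_id;

// Hypothetical "change speed" button handler, connected via gtk_builder_connect_signals().
void on_btn_dataspeed_clicked(GtkButton *button, struct app_widgets *widgets)
{
    // read the new interval, in milliseconds, from the spin button
    guint interval = (guint)gtk_spin_button_get_value_as_int(widgets->w_spn_dataspeed);

    // cancel the currently running data-capture timeout, if any
    if (widgets->data_timer_id != 0)
        g_source_remove(widgets->data_timer_id);

    // start a new timeout at the requested frequency
    widgets->data_timer_id =
        gdk_threads_add_timeout(interval, (GSourceFunc)data_timer_exe, (gpointer)widgets);
}

In main, the initial call would then store its ID the same way: widgets->data_timer_id = gdk_threads_add_timeout(DATA_REFRESH_TIMER, (GSourceFunc)data_timer_exe, (gpointer)widgets);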

Different wallpaper for each monitor in a multi-monitor setup in Windows 10

There are a number of questions and answers about setting wallpapers programmatically on multi-monitor setups in Windows, but I'm asking specifically for Windows 10 (and maybe Windows 8) because it seems to work differently from all the explanations I found.
Raymond Chen has an article "How do I put a different wallpaper on each monitor?" (https://devblogs.microsoft.com/oldnewthing/?p=25003), also quoted in Monitors position on Windows wallpaper. The core concept is that Windows places the top-left corner of the provided bitmap at the top-left corner of the primary monitor, and wraps around to fill any desktop space to the left of and/or above that. I understand that; I wrote a little program using that knowledge, and it works beautifully in Windows 7.
How it works: I create a bitmap that conceptually covers the whole desktop space, as the user sees it. I draw the contents of each monitor to that bitmap in its appropriate position (the program is written in C++ using VCL, but the principle remains the same in other programming environments):
TRect GetMonitorRect_WallpaperCoords(int MonitorNum)
{
    Forms::TMonitor *PrimaryMonitor = Screen->Monitors[0];
    Forms::TMonitor *Monitor = Screen->Monitors[MonitorNum];

    // Get the rectangle in desktop coordinates
    TRect Rect(Monitor->Left, Monitor->Top, Monitor->Left + Monitor->Width, Monitor->Top + Monitor->Height);

    // Convert to wallpaper coordinates
    Rect.Left += PrimaryMonitor->Left - Screen->DesktopLeft;
    Rect.Top += PrimaryMonitor->Top - Screen->DesktopTop;
    Rect.Right += PrimaryMonitor->Left - Screen->DesktopLeft;
    Rect.Bottom += PrimaryMonitor->Top - Screen->DesktopTop;
    return Rect;
}

std::unique_ptr<Graphics::TBitmap> CreateWallpaperBitmap_WallpaperCoords()
{
    std::unique_ptr<Graphics::TBitmap> Bmp(new Graphics::TBitmap);
    Bmp->PixelFormat = pf24bit;
    Bmp->Width = Screen->DesktopWidth;
    Bmp->Height = Screen->DesktopHeight;

    // Draw background (not that we really need it: it will never be visible)
    Bmp->Canvas->Brush->Style = bsSolid;
    Bmp->Canvas->Brush->Color = clBlack;
    Bmp->Canvas->FillRect(TRect(0, 0, Bmp->Width, Bmp->Height));

    for (int MonitorNum = 0; MonitorNum < Screen->MonitorCount; ++MonitorNum)
    {
        TDrawContext DC(Bmp->Canvas, GetMonitorRect_WallpaperCoords(MonitorNum));
        DrawMonitor(DC);
    }
    return Bmp;
}
(The draw context uses a coordinate-translation rect so that the code in the DrawMonitor function can draw in a rectangle like (0, 0, 1920, 1080) without having to know where in the full bitmap it is drawing, and a clip rect so that DrawMonitor cannot accidentally draw outside of the monitor it is drawing on.)
Then I convert that bitmap to an image that will properly wrap around when placed at the top-left corner of the primary monitor (as Raymond Chen describes in his article):
std::unique_ptr<Graphics::TBitmap> ConvertWallpaperToDesktopCoords(std::unique_ptr<Graphics::TBitmap> &Bmp_WallpaperCoords)
{
    std::unique_ptr<Graphics::TBitmap> Bmp_DesktopCoords(new Graphics::TBitmap);
    Bmp_DesktopCoords->PixelFormat = Bmp_WallpaperCoords->PixelFormat;
    Bmp_DesktopCoords->Width = Bmp_WallpaperCoords->Width;
    Bmp_DesktopCoords->Height = Bmp_WallpaperCoords->Height;

    // Draw Bmp_WallpaperCoords to Bmp_DesktopCoords at four different places to account for all
    // possible ways Windows wraps the wallpaper around the left and bottom edges of the desktop
    // space
    Bmp_DesktopCoords->Canvas->Draw(Screen->DesktopLeft, Screen->DesktopTop, Bmp_WallpaperCoords.get());
    Bmp_DesktopCoords->Canvas->Draw(Screen->DesktopLeft + Screen->DesktopWidth, Screen->DesktopTop, Bmp_WallpaperCoords.get());
    Bmp_DesktopCoords->Canvas->Draw(Screen->DesktopLeft, Screen->DesktopTop + Screen->DesktopHeight, Bmp_WallpaperCoords.get());
    Bmp_DesktopCoords->Canvas->Draw(Screen->DesktopLeft + Screen->DesktopWidth, Screen->DesktopTop + Screen->DesktopHeight, Bmp_WallpaperCoords.get());
    return Bmp_DesktopCoords;
}
Then I install that bitmap as a wallpaper by writing the appropriate values in the registry and calling SystemParametersInfo with SPI_SETDESKWALLPAPER:
void InstallWallpaper(const String &Fn)
{
    // Install wallpaper:
    // There are 3 name/data pairs that have an effect on the desktop wallpaper, all under HKCU\Control Panel\Desktop:
    // - Wallpaper (REG_SZ): file path and name of wallpaper
    // - WallpaperStyle (REG_SZ):
    //   . 0: Centered
    //   . 1: Tiled
    //   . 2: Stretched
    // - TileWallpaper (REG_SZ):
    //   . 0: Don't tile
    //   . 1: Tile
    // We don't use the Wallpaper value itself; instead we use SystemParametersInfo to set the wallpaper.
    // The file name needs to be absolute!
    assert(Ioutils::TPath::IsPathRooted(Fn));

    std::unique_ptr<TRegistry> Reg(new TRegistry);
    Reg->RootKey = HKEY_CURRENT_USER;
    if (Reg->OpenKey(L"Control Panel\\Desktop", false))
    {
        Reg->WriteString(L"WallpaperStyle", L"1");
        Reg->WriteString(L"TileWallpaper", L"1");
        Reg->CloseKey();
    }
    SystemParametersInfoW(SPI_SETDESKWALLPAPER, 1, Fn.c_str(), SPIF_UPDATEINIFILE | SPIF_SENDCHANGE);
}
But when I test it in Windows 10, it doesn't work properly anymore: Windows 10 puts the wallpaper completely in the wrong place. Seeing as other people have asked questions about multi-monitor wallpapers in the past, I'm hoping there are people with experience of it on Windows 10.
As far as I can see, Windows 10 places the top-left corner of the provided bitmap at the top-left corner of the desktop space (by which I mean the bounding rectangle of all monitors), instead of the top-left corner of the primary monitor. In code, that means: I leave out the ConvertWallpaperToDesktopCoords step, and then it works fine as far as I can see.
But I can't find any documentation on this, so I don't know if this is the official explanation of how Windows 10 does it. Use with care. Also, I don't know when this different behavior started: in Windows 10, or maybe earlier in Windows 8.
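Under that assumption, the install step reduces to saving the wallpaper-coordinate bitmap directly and skipping the conversion. The following is only a sketch of that adaptation (InstallWallpaper_Win10 is a hypothetical name, and the behaviour it relies on is, as said, undocumented):

// Sketch, assuming the observed Windows 10 behaviour: the bitmap's top-left corner is
// mapped to the top-left corner of the bounding rectangle of all monitors, so the
// wrap-around step (ConvertWallpaperToDesktopCoords) is skipped entirely.
void InstallWallpaper_Win10(const String &Fn)   // Fn must be an absolute path
{
    std::unique_ptr<Graphics::TBitmap> Bmp = CreateWallpaperBitmap_WallpaperCoords();
    Bmp->SaveToFile(Fn);       // VCL TBitmap writes a .bmp file
    InstallWallpaper(Fn);      // same registry values + SystemParametersInfo as before
}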

Susy and responsive images

I don't understand something about the grid mixins. Currently I use this code, but it only works on mobile; on desktop, the image container doesn't get the full size.
.vue-diapo-hp-img {
    // @include set-container-width();
    img { @include adaptable-img; }

    @include at-breakpoint($mobile) {
        @include set-container-width();
    }
    @include at-breakpoint($desktop) {
        @include set-container-width();
        // @include span-columns($desktop);
    }
    @include at-breakpoint($tablet) {
        @include set-container-width();
    }
}
The HTML code is (Drupal Views source):
<div class="vue-diapo-hp-img"> <img class="imagefield imagefield-field_diapo_home_pano" width="990" height="204" src="http://sandboxd6-1.vmsbx/sites/sandboxd6-1.vmsbx/files/diapo_home/site-date-yyyy/site-date-ww/gabarit-diapo-home-er.jpg?1383819992"> </div>
<div id="transparency"></div>
<div id="contenu-diapo">
<h2></h2></div>
What can I do to get both breakpoints working? And what is the difference between
@include set-container-width(); and @include span-columns($desktop); for example?
You can see it in action: http://d6sbx1.pfdev.tk/
Thanks.
EDIT1: modified code after explanation
.vue-diapo-hp-img {
    clear: both;
    img { @include adaptable-img; }

    @include at-breakpoint($mobile) {
        @include set-container-width();
    }
    @include at-breakpoint($desktop) {
        @include set-container-width();
        // @include span-columns($desktop);
    }
    @include at-breakpoint($tablet) {
        @include set-container-width();
    }
}
This doesn't give responsive images on mobile, but it does give full width on desktop.
set-container-width sets a width (or max-width) value based on the total-columns, plus grid-padding. span-columns sets the width of an item inside that container - often a relative width unless you have set your $container-style to static (which you haven't, in this case). span-columns also applies floats, and margins to align items on your grid. With a static grid and no grid-padding, they might return the same width, but they are built to calculate different objects, so they use different logic underneath.
But it looks to me like you already have a container in place, and you simply want to span the full width of that container - in which case you don't need either one. All block elements, including your image wrapper, span full width by default. If you need to clear a float above, you can just use clear: both to do that. And since you want that at every breakpoint, there's no reason to use media queries either:
.vue-diapo-hp-img {
    clear: both;
}
That should be all you need. At most, you could add width: 100% if (for some reason) it's not spanning the full width.

fix a splash screen image to display in j2me

In my J2ME app I have tried a Canvas, which works great on a Nokia phone but doesn't run on a Samsung. For that I had to switch to a Form, which works in both cases; the only issue is size. If I create a smaller image to fit both phone screens, one (Samsung) shows it OK but the other (Nokia) leaves a lot more space, and vice versa.
I need code that could stretch my image and fit it to the screen size, which I basically get from form.getHeight() and form.getWidth(). I wonder: if there is a method Image.createImage(width, height), why doesn't it stretch the image to the values I provide?
my code for that is below
try {
    System.out.println("Height: " + displayForm.getHeight());
    System.out.println("Width: " + displayForm.getWidth());
    Image img1 = Image.createImage("/bur/splashScreen1.PNG");
    img1.createImage(displayForm.getHeight(), displayForm.getWidth());
    displayForm.append(new ImageItem(null, img1, Item.LAYOUT_CENTER, null));
} catch (Exception ex) {
}
A single image will not fit all screens. But more will do.
The smaller logo image should be less than 96x54, as this is the smallest screen resolution. This image can be used up to the resolution of 128x128 without problems. With bigger resolutions it will look tiny, though.
The bigger logo image should be a bit bigger than 128x128 and can be used up to 240x320.
The code below gives an example of how to implement this.
import javax.microedition.lcdui.Image;
import javax.microedition.lcdui.Graphics;

class Splash extends javax.microedition.lcdui.Canvas {

    Image logo;

    Splash () {
        try {
            if (getWidth() <= 128) {
                // "sl" stands for Small Logo; it does not need a file extension,
                // which uses less space in the jar file
                logo = Image.createImage("/sl");
            } else {
                // "bl" stands for Big Logo
                logo = Image.createImage("/bl");
            }
        } catch (java.io.IOException e) {
            // createImage(String) throws IOException; logo stays null if loading fails
        }
    }

    protected void paint (Graphics g) {
        if (logo != null) {
            // With these anchors the logo image is drawn at the center of the screen.
            g.drawImage(logo, getWidth() / 2, getHeight() / 2, Graphics.HCENTER | Graphics.VCENTER);
        }
    }
}
As seen in http://smallandadaptive.blogspot.com.br/2008/10/showing-splash.html
I wonder: if there is a method Image.createImage(width, height), why doesn't it stretch the image to the values I provide?
The parameters in this method have nothing to do with stretching; see the API javadocs:
public static Image createImage(int width, int height)
Creates a new, mutable image for off-screen drawing.
Every pixel within the newly created image is white.
The width and height of the image must both be greater than zero.
Parameters:
width - the width of the new image, in pixels
height - the height of the new image, in pixels
The Image class (API javadocs) has two more createImage methods that use parameters called "width" and "height" - one with six arguments, another with four - but neither of these has anything to do with stretching either.
In createImage with six arguments, width and height specify the size of the region to be copied (without stretching) from the source image.
In the method with four arguments, width and height specify how to interpret the source ARGB array; without these it would be impossible to tell whether, say, an array of 12 values represents a 3x4 image or a 4x3 one. Again, this has nothing to do with stretching.

Draw a line in mfc with help of toolbar

I am trying to make a paint application in MFC using Visual C++ 6.0. I have already created a window using the Create function and also have created a toolbar with a line tool, but I am stuck on how to code the line, because the functions I know go like dc.LineTo(x, y) and dc.MoveTo(x2, y2). How do I use OnLButtonDown to trap the coordinates, or is there any other way I can draw a line? Any help will be useful.
Have a look at the MFC Scribble tutorial:
http://msdn.microsoft.com/en-us/library/aa716527%28v=vs.60%29.aspx
It will get you started on handling mouse clicks, mouse moves, and drawing.
M.
Ok, you're going to have to override several member functions to do this. I've outlined an approach below. My example below deals with a single line-drawing operation (from mouse down, to mouse up). An exercise for you, is to make it so that once you've done one, you can then do another at a different place. It's easy, btw!
CWnd::OnLButtonDown(UINT _flags, CPoint _pt);
CWnd::OnLButtonUp(UINT _flags, CPoint _pt);
CWnd::OnMouseMove(UINT _flags, CPoint _pt);
CWnd::OnPaint()
Apologies if some of these function signatures are wrong! Add some members to your window class:
// at the top of your file
#include <vector>

// in your class
typedef std::vector<CPoint> PointVector;   // CPoint, unlike plain POINT, has constructors
PointVector m_Points;

void CYourWnd::OnLButtonDown(UINT _flags, CPoint _pt)
{
    // NOTE: For more than one set of drawing, this will be different!
    m_Points.clear();
    m_Points.push_back(_pt);
}

void CYourWnd::OnMouseMove(UINT _flags, CPoint _pt)
{
    if((_flags & MK_LBUTTON) && !m_Points.empty())
    {
        const CPoint& last(m_Points.back());
        if(_pt.x != last.x || _pt.y != last.y)
        {
            m_Points.push_back(_pt);
            Invalidate();
        }
    }
}

void CYourWnd::OnPaint()
{
    CPaintDC dc(this);
    CRect rcClient;
    GetClientRect(&rcClient);
    dc.FillSolidRect(&rcClient, RGB(255, 255, 255));

    if(m_Points.size())
    {
        dc.MoveTo(m_Points[0].x, m_Points[0].y);
        for(PointVector::size_type p(1); p < m_Points.size(); ++p)
            dc.LineTo(m_Points[p].x, m_Points[p].y);
    }
}
Obviously, this is crude and gives you a single drawing operation. Once you click the left button down again, it erases what you've done. So, once you have this working:
Make it so you can draw an unlimited amount of lines. You could accomplish this in several ways such as an additional container (to store vectors), or even drawing-operation classes that you can store in a single vector and then execute.
This solution may well flicker. How might you stop this? Perhaps OnEraseBkgnd holds the clue... (a minimal double-buffering sketch follows after this list).
How about more colours?
All signs point towards creating some drawing-classes that encapsulate this for you, but I hope this has got you started.
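For the flicker point above, a common approach (shown only as a sketch, using the same assumed class name CYourWnd; it is not part of the original answer) is to suppress background erasing and paint through an off-screen bitmap:

BOOL CYourWnd::OnEraseBkgnd(CDC* /*pDC*/)
{
    return TRUE;   // no background erase; OnPaint covers the whole client area
}

void CYourWnd::OnPaint()
{
    CPaintDC dc(this);
    CRect rc;
    GetClientRect(&rc);

    // Build the frame in an off-screen bitmap, then blit it in one go.
    CDC memDC;
    memDC.CreateCompatibleDC(&dc);
    CBitmap bmp;
    bmp.CreateCompatibleBitmap(&dc, rc.Width(), rc.Height());
    CBitmap* oldBmp = memDC.SelectObject(&bmp);

    memDC.FillSolidRect(&rc, RGB(255, 255, 255));
    // ... draw the lines into memDC exactly as in the OnPaint shown earlier ...

    dc.BitBlt(0, 0, rc.Width(), rc.Height(), &memDC, 0, 0, SRCCOPY);
    memDC.SelectObject(oldBmp);
}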
