I am trying to open a new window from a JavaFX application and set its coordinates to be inside the application by doing the following:
secondStage.setX(application.getPrimaryStage().getX() + application.getPrimaryStage().getWidth()/3);
secondStage.setY(application.getPrimaryStage().getY() + application.getPrimaryStage().getHeight()/3);
It works fine when the primary stage is on the first monitor. But when it is on a second monitor, the new window opens at the left border of the second monitor rather than inside the application.
It is as if the X coordinate gets reset to 0 relative to the second monitor once it exceeds the primary screen's X bounds.
Please let me know how I can get the same behavior on the second monitor as on the primary one, using the same logic as in my code.
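For what it's worth, JavaFX stage X/Y values are absolute screen coordinates that span all monitors, so the arithmetic above should land inside the second monitor on its own. A minimal plain-Java sketch of that calculation (the class name and the example bounds are illustrative, not part of the original code):

```java
// Sketch of the positioning arithmetic above, without JavaFX. A primary
// stage sitting on a second monitor that starts at x = 1920 should yield
// a child X well past 1920, not a value relative to that monitor.
public class StagePlacement {
    static double childX(double stageX, double stageWidth) {
        return stageX + stageWidth / 3;
    }

    static double childY(double stageY, double stageHeight) {
        return stageY + stageHeight / 3;
    }

    public static void main(String[] args) {
        // Hypothetical primary stage at (1920, 100) with size 1200x900:
        System.out.println(childX(1920, 1200) + ", " + childY(100, 900)); // 2320.0, 400.0
    }
}
```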
I have built an application using TouchGFX on an STM32F746 Discovery kit.
My application has several screens that display values which the user can set and change. Whenever these values change, I save them to EEPROM.
When I power-cycle the board, I read these values back before the start screen is displayed, but they do not appear on the screen until I press a button (the button handler refreshes the screen).
So my question:
How can I initialize custom values for a screen and display them at startup in TouchGFX?
Thanks.
This is simple: just load your data in presenter->activate().
I suggest you read the article on back-end communication. It explains how to propagate values from your Model (e.g. EEPROM):
https://support.touchgfx.com/4.18/docs/development/ui-development/touchgfx-engine-features/backend-communication
In TouchGFX Designer you'll find several examples that do something similar on an F769-DISCO board (sampling a button and propagating that value to the UI).
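As a rough illustration of that flow, here is a minimal self-contained C++ sketch with the TouchGFX framework classes stubbed out. The Model/Presenter/View names mirror the TouchGFX pattern, but loadFromEeprom() and runStartupFlow() are hypothetical stand-ins for your own code:

```cpp
#include <cassert>

// Stubbed-out sketch of the TouchGFX Model/Presenter/View flow.

struct View {
    int displayedValue = 0;
    void setValue(int v) { displayedValue = v; }  // real code would also invalidate()
};

struct Model {
    int loadFromEeprom() { return 42; }  // placeholder: read the persisted value
};

struct Presenter {
    Model& model;
    View& view;
    Presenter(Model& m, View& v) : model(m), view(v) {}

    // TouchGFX calls activate() every time a screen is entered, including at
    // startup, so this is the right place to push stored values into the view.
    void activate() { view.setValue(model.loadFromEeprom()); }
};

// Simulate startup: entering the screen triggers activate(), which loads the
// saved value into the view before anything is drawn.
inline int runStartupFlow() {
    Model model;
    View view;
    Presenter presenter(model, view);
    presenter.activate();
    return view.displayedValue;
}
```

The point is that no button press is needed: because activate() runs on screen entry, the stored value is already in the view when the first frame is rendered.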
I want to have an image, on top of which I can mark points (which I should be able to drag across the image).
The image is also going to be bigger than the canvas, so I want to be able to drag it as well, while keeping it in the background and moving all the marks along with it.
I thought using a Group, disabling the controls of all objects and disabling selection for the Canvas would do the trick:
fabric.Object.prototype.hasControls = fabric.Object.prototype.hasBorders = false;
let fabricCanvasObj = new fabric.Canvas(canvas);
fabricCanvasObj.scene = new fabric.Group([new fabric.Image(img)]);
fabricCanvasObj.add(fabricCanvasObj.scene);
fabricCanvasObj.selection = false;
But now I want a new point to be added on mouse:down, unless the mouse is over an existing point, so that when the user wants to move that mark, no new mark is created in its place; I also want to move that individual point instead of the whole group.
But the event.target is the whole group, not the individual object in that group.
I've then followed the solution here: FabricJS Catch click on object inside group
Which suggested using
{subTargetCheck: true}
But this causes an exception to be thrown once I add an object to the group after it's been created.
How should I go about this?
A data source in Azure Maps can cluster point features (pins) within a certain radius. When a click event on such a cluster occurs I would like to reset my bounding box to the area represented by the cluster and zoom in to show individual pins within the cluster.
With Google Maps you can set the default behaviour of a cluster to auto zoom on click. This feature was also relatively easily accomplished in the old Bing Maps API. How can I add this functionality in Azure Maps without an unwieldy amount of JavaScript?
Indeed, it appears Azure Maps does not support this directly, but the following approach could be considered:
When the layer is clicked, the event returns the pixel position of the target object along with other properties. The minimum and maximum coordinates of the cluster circle can then be determined via the atlas.Map.pixelsToPositions function:
const coordinates = e.map.pixelsToPositions([
    [e.pixel[0] + (clusterRadius * 2), e.pixel[1] + (clusterRadius * 2)],
    [e.pixel[0] - (clusterRadius * 2), e.pixel[1] - (clusterRadius * 2)],
]);
Then the bounds of the area that might contain pins within the cluster bubble are determined via the atlas.data.BoundingBox.fromPositions function:
const bounds = atlas.data.BoundingBox.fromPositions(coordinates);
And finally the map viewport is set:
map.setCamera({
    bounds: bounds,
    padding: 0
});
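Putting the three steps together, the click handler body might look like the following sketch. The map and atlas objects are passed in explicitly here so the logic is easy to exercise outside a browser, and clusterRadius is assumed to match the radius of your cluster bubble layer:

```javascript
// Sketch: zoom the camera to the area covered by a clicked cluster bubble.
// e is the click event (e.pixel is the clicked pixel position).
function zoomToCluster(map, atlas, e, clusterRadius) {
    // 1. Convert two opposite corners of the cluster bubble from pixels
    //    to geographic positions.
    const coordinates = map.pixelsToPositions([
        [e.pixel[0] + (clusterRadius * 2), e.pixel[1] + (clusterRadius * 2)],
        [e.pixel[0] - (clusterRadius * 2), e.pixel[1] - (clusterRadius * 2)],
    ]);
    // 2. Build a bounding box from those two positions.
    const bounds = atlas.data.BoundingBox.fromPositions(coordinates);
    // 3. Move the camera to that box.
    map.setCamera({ bounds: bounds, padding: 0 });
    return bounds;
}
```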
Here is a demo for your reference.
I am testing a web application and I need to move an element from one window to another.
Geb has a clickAndHold() function, but that doesn't work for me because the offset works only within one window. Any suggestions?
interact {
    clickAndHold($('#draggable'))
    moveByOffset(150, 200)
    release()
}
Edit: it is not possible to do this with WebDriver. You are restricted to a single window and can't drag an object outside of it (from one window to another), but we managed to do it with java.awt.Robot.
When I run my Windows 8 application it creates a primary tile and a secondary tile.
If I click on the primary tile it should open a URL. Clicking on the secondary tile should not open the URL and should instead show just a splash screen, so I added Window.Current.VisibilityChanged += Window_VisibilityChanged; to capture the event. But this gets called when you click on either tile.
How do I find out if the event is triggered from the primary or secondary tile?
Another thing I noticed is that if I click on the secondary tile, it first hits Window_VisibilityChanged and then also goes through the OnLaunched method of App.xaml.cs.
Is there a way to figure out whether the event is being fired from the secondary tile before it hits the OnLaunched method?