I want to make a simple paint program in Visual C++ which allows the user to draw a path made of a series of straight lines, each following on from the last. Once the user is done, they should double-click to stop drawing. It is important that I record the co-ordinates of the start and end points of each line in the path, because I want to use this information to find the magnitude and direction of each line using simple math. Can someone give me somewhere to start, and any other guidance?
You should start with an MFC tutorial. Learn the basics: the Document/View architecture and
how painting is done (GDI and device contexts).
Basically, you should:
1. Create an MFC application (SDI, single document interface).
2. Handle OnLButtonDown (WM_LBUTTONDOWN), OnMouseMove (WM_MOUSEMOVE), and OnLButtonUp (WM_LBUTTONUP) in your view class.
3. Maintain a dynamic array/list (e.g. CArray or CTypedPtrList) of the points.
4. Handle the double-click event (WM_LBUTTONDBLCLK) to detect completion.
You should call Invalidate() after each click, in order to see the changes
on the screen; see the sketch below.
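A minimal sketch of those handlers, assuming a wizard-generated SDI view class I'll call CPaintView (the message-map entries are added by the class wizard, and the header declares the two members shown in the comment):

// In the view's header (requires <vector>):
//   std::vector<CPoint> m_points;  // the clicked vertices of the path
//   bool m_drawing = false;        // true while the user is adding points

void CPaintView::OnLButtonDown(UINT nFlags, CPoint point)
{
    m_drawing = true;
    m_points.push_back(point);   // each click records a segment endpoint
    Invalidate();                // repaint to show the new segment
    CView::OnLButtonDown(nFlags, point);
}

void CPaintView::OnLButtonDblClk(UINT nFlags, CPoint point)
{
    m_points.push_back(point);   // final vertex
    m_drawing = false;           // double-click ends the path
    Invalidate();
    CView::OnLButtonDblClk(nFlags, point);
}

void CPaintView::OnDraw(CDC* pDC)
{
    // Connect consecutive points with straight lines.
    for (size_t i = 1; i < m_points.size(); ++i)
    {
        pDC->MoveTo(m_points[i - 1]);
        pDC->LineTo(m_points[i]);
    }
}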
That's just a little bit of information to get you started.
You'll want:
- a class or struct to represent a point (if you make it a class, it can have computation methods that, for example, calculate the distance and direction to another point; see the sketch after this list),
- a member variable: an instance of a container class (list, array, etc.) to hold your points,
- a member variable: a boolean flag recording whether you are drawing or not (starting with not).
And you'll need to handle:
- the mouse-click event, to instantiate a point and add it to your container,
- the mouse-move event, to draw a line from the last point to the current mouse position if the drawing flag is true,
- the mouse double-click event, to add the double-click location to your container of points and turn off the drawing flag.
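A minimal sketch of such a point type, assuming plain doubles for the coordinates (the names are my own):

#include <cmath>

struct PathPoint
{
    double x = 0.0;
    double y = 0.0;

    // Distance (magnitude) from this point to another.
    double distanceTo(const PathPoint& other) const
    {
        const double dx = other.x - x;
        const double dy = other.y - y;
        return std::sqrt(dx * dx + dy * dy);
    }

    // Direction from this point to another, in radians, measured
    // counter-clockwise from the positive x axis.
    double directionTo(const PathPoint& other) const
    {
        return std::atan2(other.y - y, other.x - x);
    }
};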
Yaron's strategy doesn't draw lines until two points have been clicked. Mine uses "rubber-banding": it anchors the first end of the line, then lets the second end follow your cursor until you click to anchor it down. Use whichever one you like better.
If I were you, I would use Qt. Qt widgets are great for user interfaces; you should check the Qt examples.
If you want to do image processing behind it, you can use the ImageMagick library; it is great for any kind of image manipulation.
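For a taste of what that looks like, here is a minimal sketch of the same line-path idea as a single Qt widget (my own code, not one of the official examples):

#include <QApplication>
#include <QWidget>
#include <QPainter>
#include <QMouseEvent>
#include <QVector>

class PathWidget : public QWidget
{
public:
    PathWidget()
    {
        setWindowTitle("Path sketch");
        resize(400, 300);
    }

protected:
    void mousePressEvent(QMouseEvent *event) override
    {
        m_points.append(event->pos());   // record each clicked vertex
        update();                        // schedule a repaint
    }

    void mouseDoubleClickEvent(QMouseEvent *event) override
    {
        // Note: Qt delivers a press event before the double-click,
        // so a real app would de-duplicate and set a "done" flag here.
        m_points.append(event->pos());
        update();
    }

    void paintEvent(QPaintEvent *) override
    {
        QPainter painter(this);
        // Connect consecutive points with straight lines.
        for (int i = 1; i < m_points.size(); ++i)
            painter.drawLine(m_points[i - 1], m_points[i]);
    }

private:
    QVector<QPoint> m_points;
};

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);
    PathWidget widget;
    widget.show();
    return app.exec();
}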
We want to attach some UI and other items to the back of the articulated hand, and I'm trying to figure out how to do that. I have found how to turn the hand visualizer on and off through MixedRealityHandTrackingProfile, but I'm trying to find the Unity GameObject I can parent the items to, or at least a way to access the hand transform. Thanks for any pointers!
Step 1: Select the object in the scene hierarchy that you want to follow your hand. Click “add component” in the inspector panel.
Step 2: Type in “RadialView” in the search box and you should see the RadialView solver appear. Click on it. You will see a few additional required scripts appear automatically.
Note: this adds the SolverHandler script automatically; along with it, the Radial View script shows up as well, just like the Orbital script does.
Step 3: Change the Radial View to follow not the head but the left hand. Select the dropdown menu next to the "tracked object to reference" option, then select "hand joint left" from the menu.
Step 4: As you may see, once you select the hand joint you can choose which part of the hand you want the cube to follow; there are a lot of options to use! For this example we are going to use the wrist, so next to the "tracked hand joint" option click the dropdown menu and select wrist.
Note: not all joints can be tracked in the current version of the HoloLens 2. This is a bug that may be fixed in the near future.
Now if you press play and try it out in your scene, you will see that the object does follow the wrist, but it may lag a little behind and look like it's struggling to keep up. To fix this, and keep the object with the wrist at all times, set the maximum and minimum distances to 0, so that the cube keeps no distance between itself and the user's wrist. Once set, the cube will be perfectly aligned with the wrist.
In the latest mrtk_development branch, as of PR 4532, you can also use the "Hand Constraint" component. You can see an example of how to use it at MixedRealityToolkit.Examples/Experimental/HandTracking/Scenes/HandBasedMenuExample.unity.
Have a look at Assets/MixedRealityToolkit.SDK/Experimental/Features/Utilities/Solvers/HandConstraint.cs for the implementation.
You can add this behavior by adding a "Hand Constraint" solver to the object that you wish to follow the hand.
The Hand Constraint component will also be available in the upcoming MRTK V2.0.0 RC2 release.
I'm currently working on a small utility; it's my first ever X project. The utility draws a small circle around the mouse pointer. I use an app called Pinpoint to do the same on my Mac; it helps me find my mouse, as I'm visually impaired.
The utility creates a transparent X window and draws a circle inside it, then moves that window with the mouse pointer so that the circle follows the mouse.
It currently works, except for one detail. Mouse events are not propagated up to the underlying windows. Basically, the utility makes the mouse useless.
As far as I can tell from the Xlib docs, if not otherwise specified, new windows should propagate all events. How can I fix this?
The code can be found on GitHub: https://github.com/blubber/circle-cursor; it's a bit messy currently, because it is just a proof of concept.
I would suggest doing it via the cursor image as well; there are many situations in which you won't be able to receive mouse events, where the only possible source would be polling with XQueryPointer.
With the XFixes extension you can subscribe to all cursor-image-changed events and get the most recent shape of the cursor, and with XRender you can set your own (possibly animated) cursor.
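A minimal sketch of the XFixes part, assuming the extension is available (link with -lX11 -lXfixes):

#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xfixes.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) return 1;

    int event_base, error_base;
    if (!XFixesQueryExtension(dpy, &event_base, &error_base)) return 1;

    // Be notified whenever the cursor image changes anywhere on screen.
    XFixesSelectCursorInput(dpy, DefaultRootWindow(dpy),
                            XFixesDisplayCursorNotifyMask);

    for (;;) {
        XEvent ev;
        XNextEvent(dpy, &ev);
        if (ev.type == event_base + XFixesCursorNotify) {
            // Fetch the current cursor image (ARGB data in img->pixels).
            XFixesCursorImage *img = XFixesGetCursorImage(dpy);
            if (img) {
                printf("cursor is now %hux%hu\n", img->width, img->height);
                // ...composite a circle onto these pixels, then install
                // the result, e.g. via XRenderCreateCursor + XDefineCursor.
                XFree(img);
            }
        }
    }
}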
I'm pretty new to JavaFX, so I'm trying to learn here; please be reasonable and don't diss my question. I really appreciate any help at all, thanks!
I would like to know how I can move an object, let's say this circle, on different events, like key press or mouse click, mouse move, whatever.
Circle circle = new Circle();
circle.setCenterX(100.0f);
circle.setCenterY(100.0f);
circle.setRadius(50.0f);
Do I need to use that KeyFrame thing I saw in the JavaFX site tutorial, or how does this work?
I would not have asked this here if I weren't feeling so lost, honestly.
So, to make this clear: what is the code for moving objects I have created, using events?
EDIT: By moving it I mean: press the up key and it moves up by a few pixels; transform it, maybe, with another key; or click somewhere on the scene and make it move there instantly, or travel there with a certain speed. I don't have to redraw it like you need to with an HTML5 canvas, I hope, right?
I don't have to redraw it like you need to with html5 canvas, I hope, right?
Not if you are using a standard JavaFX scene graph as opposed to a JavaFX canvas.
I would like to know how I could move an object, let's say this circle on different events, like keypress or mouseclick, mousemove, whatever
There are three ways to move a Shape:
You can adjust the shape's geometry (e.g. the centerX/centerY properties of a circle).
You can adjust the shape's layout (e.g. its layoutX/layoutY properties).
You can adjust the shape's translation (e.g. its translateX/translateY properties).
You can think of the layout as the home position for the object, i.e. where it should normally be in the context of its parent group. You can think of its translation transform as a temporary position for the object (often used when the object is being animated).
If you are using a layout pane such as a VBox or TilePane, then the layout pane will handle setting the layout co-ordinates of the child node for you. If you are using a simple Group or a plain Pane or Region, then you are responsible for setting the correct layout values for the child nodes.
To listen for events, set event handlers on Nodes or Scenes.
Here is a small sample app which demonstrates the above. It places the object to be moved inside a Group and modifies the position of the object within the Group in response to various events.
I'd like to write a Linux screen magnifier that's customized to my liking. Ideally, the magnified window would be a square about 150 pixels wide that follows the mouse cursor wherever it goes.
Is it possible to do this in X11? Would it be easier to have an application window that follows the mouse around, or would it be better (or possible) to forget about the window altogether and just make the mouse pointer a 150x150 square that magnifies whatever's underneath?
Look at the source to xeyes?
This actually already exists; it's called Xmag (do a Google search for additional info). You might want to check out its source code if you want to know how it works.
EDIT: it looks like I misread your question a little bit. If you want a magnified square to follow the mouse pointer around, I suppose it should be possible, but I don't know the technical details of how you'd do it. Regardless, Xmag is probably the place to start.
I am unsure if this can run as its own app or would have to be integrated into your window manager. Either way, you would need libx11 (might have a different name from distro to distro). Also, I would suggest taking a look at swarp. I know this is not even close to what you are talking about, but the source code is only 35 lines and it shows what can be done with libx11.
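As a flavor of what plain Xlib gives you, here is a minimal sketch of my own (unrelated to swarp; link with -lX11) of a bare 150x150 window that follows the pointer, with the actual magnification left out:

#include <unistd.h>
#include <X11/Xlib.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) return 1;

    int scr = DefaultScreen(dpy);
    Window root = RootWindow(dpy, scr);
    // A plain top-level window; a real magnifier would also make it
    // override-redirect and draw scaled screen contents into it.
    Window win = XCreateSimpleWindow(dpy, root, 0, 0, 150, 150, 1,
                                     BlackPixel(dpy, scr),
                                     WhitePixel(dpy, scr));
    XMapRaised(dpy, win);

    for (;;) {
        Window r, c;
        int rx, ry, wx, wy;
        unsigned int mask;
        XQueryPointer(dpy, root, &r, &c, &rx, &ry, &wx, &wy, &mask);
        XMoveWindow(dpy, win, rx - 75, ry - 75);  // keep it centred
        XFlush(dpy);
        usleep(16000);  // roughly 60 updates per second
    }
}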
I would personally make that a frameless window that always stays on top, with a 1px hole in the middle. The events that the user generates (mouse clicks, key presses, whatever) are passed to the window below.
And when the user moves the cursor, that ought to be visible to your window, and you just move it over a bit. For the magnifying part, well, that is left as an exercise for the reader (because I do not know how to do that as of yet ;-).
TeXworks comes with such a feature for inspecting the PDF resulting from typesetting a LaTeX source. You can also choose between a square and a circular magnifier. See https://www.tug.org/texworks/ for access to the code, which can serve as a launchpad.
I have several nested X Windows; let's say a scrollable window within a scrollable window (see the example below). In such a case the main window contains (at least) the major scroll bars and the (major) drawing area they control. This drawing area in turn contains (at least) a scrollable window batch: a (minor) main window containing a scroll bar and a minor drawing area.
During live scrolling of an inner drawing area the redraw procedure messes up, because I use XCopyArea to speed up the process: I move the contents that are still valid and invoke the actual redraw routine only for the newly exposed content. This works fine when the inner drawing batch stands by itself, but when it is nested within another one a problem occurs: while the inner scrolling batch is only partially visible (i.e. the major drawing area is scrolled), redrawing of the newly exposed contents is clipped away by the major drawing area and never actually performed, yet it is considered done. When, on the next scroll, XCopyArea copies this supposedly-redrawn area, it is actually empty. Eventually this empty area shows up on the partially visible inner scrolling batch. On the first general redraw message it gets fixed.
If I could obtain the clipping mask for what is actually visible of (my) inner drawing area, I could adjust the XCopyArea() and redraw calls and overcome the problem without resorting to plan "B", which is redrawing the whole contents on every scroll-bar movement.
Example: developing a plugin for Mozilla Firefox, I need to determine the region that describes the visible area of "my" window, i.e. the one that is passed from the Mozilla system as the plugin viewport.
If it's really an X Window you get, and not a widget from some specific toolkit (like GTK+, maybe), then you can use the XGetWindowAttributes function call.
This fills out a provided XWindowAttributes structure, which includes integers for the x and y position of the window as well as its width and height and other useful facts.
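For instance, a rough sketch of my own (it ignores overlap by sibling windows; link with -lX11) that clips a window's rectangle against all of its ancestors, which is essentially the visibility information asked about above:

#include <stdio.h>
#include <X11/Xlib.h>

// Clip a window's rectangle against every ancestor, walking up with
// XQueryTree; the result (in the top-level parent's coordinates) is the
// rectangle within which the window can actually be visible.
static void visible_rect(Display *dpy, Window w,
                         int *x, int *y, int *width, int *height)
{
    XWindowAttributes a;
    XGetWindowAttributes(dpy, w, &a);
    int x1 = a.x, y1 = a.y, x2 = a.x + a.width, y2 = a.y + a.height;

    for (;;) {
        Window root, parent, *children;
        unsigned int nchildren;
        if (!XQueryTree(dpy, w, &root, &parent, &children, &nchildren))
            break;
        if (children)
            XFree(children);
        if (parent == root || parent == None)
            break;

        XGetWindowAttributes(dpy, parent, &a);
        if (x1 < 0) x1 = 0;               // clip to the parent's interior
        if (y1 < 0) y1 = 0;
        if (x2 > a.width)  x2 = a.width;
        if (y2 > a.height) y2 = a.height;
        x1 += a.x; y1 += a.y;             // move into grandparent coords
        x2 += a.x; y2 += a.y;
        w = parent;
    }
    *x = x1;
    *y = y1;
    *width  = (x2 > x1) ? x2 - x1 : 0;
    *height = (y2 > y1) ? y2 - y1 : 0;
}

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) return 1;

    Window focus;
    int revert, x, y, width, height;
    XGetInputFocus(dpy, &focus, &revert);   // just any window to test with
    if (focus == None || focus == PointerRoot) return 1;

    visible_rect(dpy, focus, &x, &y, &width, &height);
    printf("visible: %dx%d at (%d,%d)\n", width, height, x, y);
    return 0;
}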
But in reality I think you are probably using the Mozilla plugin API inherited from Netscape, aka NPAPI, and in that case what you get is a call to your function NPP_SetWindow() at least once (and again if necessary because something changed) with a structure that contains the information you're looking for. Try looking at http://www.mozilla.org/projects/plugins/ for more information about the APIs you should use.