One of the most fundamental and frustrating annoyances I'm facing in Visio is being unable to anchor connectors to the sloping faces of the diamond shape.
My organization commonly uses this shape in flow diagrams.
Whether it's a densely connected logic point, which needs more than the default 5-6 anchors...
...or a self-connecting loop like this:
...I often want to connect to the sloping faces of the diamond, but I can't find a way to anchor to the sloping part of the shape.
Currently I'm settling for connecting to one side, but that leads to frustration when things are repositioned.
I've also explored the Data and Format Shape menus (reached by right-clicking the shape and the path I'm trying to connect to it), as those sounded promising. However, I haven't found anything in those menus that looks close to what I need.
Seems like there must be a way to do this, though.
Update
I've also tried to redraw the diamond with the Pencil in the ribbon's Tools pane under the Shapes dropdown, but did not have any luck anchoring to the result.
I've also clarified that my question relates to a basic anchoring need, given my initial example's deviation from traditional UML.
Overview
To anchor somewhere on a shape face other than the defaults, simply add connection points (Shift+Ctrl+1, by default).
If this is a common issue, create a master shape with additional connection points to save time.
Adding Connection Points
In Visio 2010 or later:
1.) Enable Connection Points
On the View tab, make sure Connection Points in the Visual Aids group is CHECKED:
!! NOTE: If this step is skipped, attempts to add points may fail.
2.) Enter Connection Point Edit Mode
Either press Shift + Ctrl + 1 or go to the Tools group on the Home tab and click the x (Connection Point) tool to enter the appropriate edit mode.
3.) Select Shape
Click to select the shape to be edited.
4.) Add a point
Hold Ctrl and then click again on the desired position along the face of the selected shape to add a point.
(Visio 2013 -- after adding a point)
The point is depicted in Visio 2010 as a magenta 'x', while in Visio 2013 it's depicted as a red square. In Visio 2010 the shape itself is thinly outlined, with pre-existing connection points shown as blue 'x's; in Visio 2013 it's instead depicted as a gray bounding box, with pre-existing points shown in gray for unselected shapes.
You must select the shape before adding points; once it's selected, however, you may add as many points as you like.
BEWARE -- once a shape has been selected, you can also add connection points to other shapes nearby, which can lead to some weird routing.
Complete!
If you don't mind: you are asking not a UML question, but a Visio drawing question.
However, answering in a UML context: your drawing does not make sense. Removing the No path would make it more valid. It should then be an Action called Wait for something that continues only when that something happens. You take a decision only when there's something to decide, not to stop the control flow until an event happens.
In response to your chat question (use of UML): it all depends. Whether or not you stick with the UML specification (the ISO source is actually available for free at the author's site) is your decision. UML itself offers great ways to adapt the language to your domain by using profiles. Whenever you deviate from the standard, you have to document that, and people need to be trained accordingly.
I have to admit that the UML specification is no bedtime reading. However, there are great sources to learn from (e.g. lots of examples are found here). I myself have worked with UML in practice for more than 20 years and have to say that it was worth the time learning it. Always remember that UML is a language, and like any language it needs to be spoken actively to convey ideas effectively. Here in Germany we have many dialects as well as a general High German. People who share an idiom can talk to their peers without issue, but people from the north and south are better served using the common idiom, since their own dialects differ quite fundamentally.
Related
I'm a big fan of PaperJS; however, the library doesn't see much activity, so we're looking at other tools like KonvaJS, Fabric, and Pixi. We'd like to replicate the example here:
http://paperjs.org/examples/path-simplification/
in KonvaJS, but we're not sure which class is most appropriate. Should we use Line, which is described as a collection of points with tension; Path, which is what we use in Paper; or the Shape class? Does KonvaJS offer the same kind of access to the Bezier curve tools and the shape border (the blue line) found in the above-mentioned Paper example?
Konva.Line takes a flat array of x and y values in its points property and draws straight lines connecting those points. The tension property can be used to make the straight-line joins more curvy, as in the sketch below.
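A minimal sketch, assuming Konva is loaded on a page with a <div id="container"> (the element id, dimensions, and point values are mine, purely for illustration):

    // Stage and layer boilerplate shared by the sketches below.
    const stage = new Konva.Stage({ container: 'container', width: 400, height: 300 });
    const layer = new Konva.Layer();
    stage.add(layer);

    const line = new Konva.Line({
      points: [20, 120, 90, 40, 160, 100, 230, 30], // flat [x1, y1, x2, y2, ...]
      stroke: 'red',
      strokeWidth: 3,
      tension: 0.5, // 0 draws straight segments; higher values round the joins
    });
    layer.add(line);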
Konva.Path expects you to provide a data property that is more like a list of SVG drawing instructions: move, line-to, arc, etc. (See the list of supported instructions in the Konva docs for Konva.Path.data here.)
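A corresponding sketch, reusing the stage and layer from above (the path data string is an arbitrary example of mine):

    const path = new Konva.Path({
      data: 'M20 120 L90 40 C120 10, 200 60, 230 30', // move, line-to, cubic Bezier
      stroke: 'blue',
      strokeWidth: 2,
    });
    layer.add(path);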
There are no built-in path-editing features equivalent to those in the demo you linked to, so no automatic anchors on the path control points and no Bezier handles. You would have to DIY those. Having said that, it would all be achievable: the drawing of the control anchors and lines, the listening for mouse and drag events, and the final passing of the SVG drawing data back to the Konva.Path shape when the path's edit mode ends are all well supported in Konva. A rough sketch of the anchor idea follows.
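To illustrate the DIY approach, here is a minimal sketch (my own, not the demo's algorithm) that attaches draggable anchor circles to the line from the first sketch and writes their positions back on drag:

    // One draggable anchor per point; dragging an anchor rebuilds the line.
    const anchors = [];
    for (let i = 0; i < line.points().length; i += 2) {
      const anchor = new Konva.Circle({
        x: line.points()[i],
        y: line.points()[i + 1],
        radius: 6,
        fill: 'white',
        stroke: 'black',
        draggable: true,
      });
      anchor.on('dragmove', () => {
        // Rebuild the flat [x1, y1, x2, y2, ...] array from anchor positions.
        line.points(anchors.flatMap((a) => [a.x(), a.y()]));
      });
      anchors.push(anchor);
      layer.add(anchor);
    }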
As of May 2022, the Konva lib is well supported, with appropriately frequent releases (as Goldilocks would want: not too many and not too few) and no ill-thought-out breaking changes; issues are responded to, SO posts get replies, and there is a busy Discord channel.
I'm trying to create an orbiting menu. It needs to follow the user around, but it also needs a follow threshold so the user can interact with every corner of it.
From a design perspective (HoloLens, HoloLens 2, MR, ...), is it a good idea to have a floating menu?
Menu design in XR is a complex topic, so it maybe deserves its own question, as Julia suggested.
In short, we've seen mixed results in our user testing with floating menus. They "work" in the sense that the user can always access the menu even while walking around in experiences that cover large areas.
If your experience is confined to a smaller space, I'd recommend looking into another interaction paradigm: a wall-, table-, or floor-placed interaction menu.
If you do want to be able to roam freely, a floating menu may be the answer. You can then use a basic tagalong script or configure a solver (body lock) as in this example by Dong Yoon Park (under the section Solver System): https://medium.com/@dongyoonpark/open-source-building-blocks-for-windows-mixed-reality-experiences-hololens-mixedrealitytoolkit-28a0a16ebb61
Is there any body of evidence that we could reference to help determine whether a person is using a device (smartphone/tablet) with their left hand or right hand?
My hunch is that you may be able to use accelerometer data to detect a slight tilt, perhaps only while the user is manipulating some sort of on-screen input.
The answer I'm looking for would state something like, "research shows that 90% of right-handed users who utilize an input mechanism tilt their phone an average of 5° while inputting data, while 90% of left-handed users utilizing an input mechanism hold their phone tilted an average of -5°".
Given this data, one would be able to read accelerometer data and make informed decisions about the placement of on-screen items that might otherwise be in the way for left-handed or right-handed users.
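For what it's worth, a minimal sketch of how the tilt idea above could be read in a browser, assuming DeviceOrientationEvent is available (the ±5° threshold is purely the hypothetical figure from the question, not an established result):

    // gamma is the left/right tilt in degrees (null if no sensor data).
    // Note: iOS additionally requires DeviceOrientationEvent.requestPermission().
    window.addEventListener('deviceorientation', (e) => {
      if (e.gamma === null) return;
      if (e.gamma > 5) console.log('tilted right while inputting');
      else if (e.gamma < -5) console.log('tilted left while inputting');
    });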
You can definitely do this, but if it were me, I'd try a less complicated approach. First you need to recognize that no specific approach will yield 100% accurate results; they will be guesses, but hopefully highly probable ones. With that said, I'd explore the simple-to-capture data points of basic touch events. You can leverage these data points and pull the x/y coordinates on touch start/end:
touchstart: Triggers when the user makes contact with the touch surface and creates a touch point inside the element the event is bound to.
touchend: Triggers when the user removes a touch point from the surface.
Here's one way to do it. It could be reasoned that if a user is left-handed, they will use their left thumb to scroll up/down the page. Based on the way the thumb rotates, swiping up will naturally cause the arc of the swipe to bow outwards. In touch-event terms: on an upward swipe, if the touchstart X is greater than the touchend X, you could deduce the user is left-handed. The opposite holds for a right-handed person: on an upward swipe, if the touchstart X is less than the touchend X, you could deduce they are right-handed. A rough sketch:
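A minimal sketch of that heuristic (the 30px threshold for recognizing an upward swipe is my own guess, to be tuned against real usage):

    let start = null;

    document.addEventListener('touchstart', (e) => {
      start = { x: e.touches[0].clientX, y: e.touches[0].clientY };
    });

    document.addEventListener('touchend', (e) => {
      if (!start) return;
      const end = { x: e.changedTouches[0].clientX, y: e.changedTouches[0].clientY };
      // Only consider clear upward swipes (end at least 30px above start).
      if (start.y - end.y >= 30) {
        // Outward arc of the thumb: start X > end X suggests a left thumb.
        console.log(start.x > end.x ? 'probably left-handed' : 'probably right-handed');
      }
      start = null;
    });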
Here's one reference on getting started with touch events. Good luck!
http://www.javascriptkit.com/javatutors/touchevents.shtml
There are multiple approaches and papers discussing this topic. However, most of them were written between 2012 and 2016. After doing some research myself, I came across a fairly new article that makes use of deep learning.
What sparked my interest is the fact that they do not rely on swipe direction, speed, or position, but rather on the capacitive image each finger creates during a touch.
Highly recommend reading the full paper: http://huyle.de/wp-content/papercite-data/pdf/le2019investigating.pdf
What's even better, the dataset, together with Python 3.6 scripts to preprocess the data and to train and test the model described in the paper, is released under the MIT license. They also provide the trained models and the software to run the models on Android.
Git repo: https://github.com/interactionlab/CapFingerId
Apologies if there is a thread for this already; I couldn't find one that I could get my teeth into.
Anyway, I'm new to WPF and want to create a custom control that will be a sort of graphic control. The graphic will always consist of a circle containing a matrix of several squares (from several hundred to several thousand, actually). The squares need to respond to mouse click and mouse-over events (and ideally be navigable/selectable via keyboard). Each square will represent an object I've coded.
In the past I've used a grid control to display the coloured squares (with the VCL in C++Builder), but I would like to make a graphical version. (Actually, another question I'd like to ask is: is there a WPF grid control where I can set the colours of individual cells?)
The question is, where to start? Do I start with a canvas and draw on it? Do I derive from an existing object? I'm just a little short on implementation ideas, so any pointers or advice you can offer will be gratefully received.
BBz
First off, I would suggest getting a decent handle on WPF and how it approaches the problem set. It is vastly different from previous .NET desktop technologies such as WinForms. Once you have a decent understanding of the separation of logic from UI and of how WPF approaches the problem, you can dive in and begin making the right decisions based on what you encounter.
The problem you mention can be solved in multiple ways. In regards to your question about making use of a Grid: that could be done, as it is a layout type. It is vastly superior to the Canvas in terms of arranging your visual structure. The defined rows/columns are nothing more than containers which can hold varying UI objects. Therefore, pushing a Rectangle into the Grid and coloring it as desired would give you the effect you are looking for. This Rectangle could then become a custom control on which you could define varying properties, as well as specific triggers for mouse-overs, etc.
At a higher level you will want to encapsulate this logic as a UserControl which will also hold your custom control. Perhaps the UserControl contains the Grid which will make use of your custom control.
Hopefully this gives you some ideas about how to get started; however, getting a better understanding of WPF will help you immensely in achieving your goal.
I work for a ticketing agency, and we print out tickets on our own ticket printer. I have been hand-coding the ticket designs and storing the templates in a database. If a new field needs adding to a ticket, I add it manually and use the arcane coordinate system to estimate where the fields should go and how much the other fields need to move to accommodate the new info.
We always planned to automate this system with a simple (I stress the word simple) graphical editor. Basically, we don't foresee tickets changing radically in shape any time soon; we have one size of ticket, and the ticket printer firmware is super simple because it's more of an industrial machine, with about 10 fonts and some really basic sizing interactions.
I need this editor to display a rectangle with the pixel dimensions of the tickets (it can even be actual size) and have a resizable grid, represented by dots rather than lines, which can toggle between superimposition and invisibility on top of the ticket rectangle.
Then I want to be able to represent fields by drawing rectangles filled with the letter "x" that show the maximum size of the field (to prevent overlaps). These fields should be selectable, draggable, and droppable in a snap-to-grid fashion.
I've worked out the maths of it, but I have no idea how to draw rectangles, then draw grids in layers, and then put further rectangles full of 'x'es on top of those. I also don't really know much about changing drawn positions in response to mouse events. It's simply not something I've ever had to do.
All the tutorials I've seen so far presume that you already know a lot about using the draw objects and are seeking to extend a basic knowledge of them. I just need pointing towards a good tutorial on manipulating floating objects in a PictureBox in the first place.
Any ideas?
For those of you in need of a guide to this field, unusual at least for those of us with a BIS background, I would heartily endorse:
https://web.archive.org/web/20141230145656/http://bobpowell.net/faqmain.aspx
I am now happily drawing graphical interfaces and getting them to respond to control inputs without too much hassle.