Really Basic Graphics in C# 2.0 Tutorials

I work for a ticketing agency, and we print tickets on our own ticket printer. I have been hand-coding the ticket designs and storing the templates in a database. If a new field needs adding to a ticket, I add it manually, using the arcane coordinate system to estimate where the fields should go and how far the other fields need to move to accommodate the new info.
We always planned to automate this system with a simple (I stress the word simple) graphical editor. Basically, we don't foresee tickets changing radically in shape any time soon: we have one size of ticket, and the printer firmware is extremely simple because it's more of an industrial machine; it has about 10 fonts and some really basic sizing interactions.
I need this editor to display a rectangle matching the pixel dimensions of the ticket (it can even be actual size), with a resizable grid on top of the ticket rectangle that is drawn as dots rather than lines and can be toggled between visible and invisible.
Then I want to be able to represent fields by drawing rectangles filled with the letter "x" that show the maximum size of each field (to prevent overlaps). These fields should be selectable, draggable and droppable in a snap-to-grid fashion.
I've worked out the maths of it, but I have no idea how to draw rectangles, draw grids over them in layers, and then put further rectangles full of 'x'es on top of those. I also don't really know anything about changing drawn positions in response to mouse events. It's simply not something I've ever had to do.
All the tutorials I've seen so far presume that you already know a lot about using the drawing objects and are seeking to extend a basic knowledge of them. I just need pointing in the direction of a good tutorial on manipulating floating objects in a PictureBox in the first place.
Any ideas?

For those of you in need of a guide to this field, which is unusual at least for those of us with a BIS background, I would heartily endorse:
https://web.archive.org/web/20141230145656/http://bobpowell.net/faqmain.aspx
I am now happily drawing graphical interfaces and getting them to respond to control inputs without too much hassle.
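
To make that concrete, here is a minimal sketch of the layered-drawing approach the FAQ teaches, assuming C# 2.0-era WinForms; the sizes and names are made up. It paints straight onto a form for brevity, but the same Paint/MouseDown/MouseMove handlers work on a PictureBox. Paint order gives you the layers: ticket outline first, dot grid second, field rectangle last.

    using System;
    using System.Drawing;
    using System.Windows.Forms;

    public class TicketEditor : Form
    {
        const int GridSize = 10;                             // grid spacing in pixels (made up)
        Rectangle ticket = new Rectangle(20, 20, 400, 150);  // ticket outline at actual pixel size
        Rectangle field = new Rectangle(40, 40, 120, 30);    // one draggable field box
        bool showGrid = true;                                // toggle for the dot-grid layer
        bool dragging;
        Point dragOffset;

        public TicketEditor()
        {
            DoubleBuffered = true;                           // stops flicker while dragging
            ClientSize = new Size(440, 190);
            Text = "Ticket editor sketch";
        }

        protected override void OnPaint(PaintEventArgs e)
        {
            Graphics g = e.Graphics;
            g.DrawRectangle(Pens.Black, ticket);             // layer 1: the ticket

            if (showGrid)                                    // layer 2: grid as dots, not lines
                for (int x = ticket.Left; x <= ticket.Right; x += GridSize)
                    for (int y = ticket.Top; y <= ticket.Bottom; y += GridSize)
                        g.FillRectangle(Brushes.Gray, x, y, 1, 1);

            g.FillRectangle(Brushes.White, field);           // layer 3: the field, filled with x's
            g.DrawRectangle(Pens.Blue, field);
            g.DrawString("xxxxxxxxxx", Font, Brushes.Blue, field);
        }

        protected override void OnMouseDown(MouseEventArgs e)
        {
            if (field.Contains(e.Location))                  // select the field under the cursor
            {
                dragging = true;
                dragOffset = new Point(e.X - field.X, e.Y - field.Y);
            }
        }

        protected override void OnMouseMove(MouseEventArgs e)
        {
            if (!dragging) return;
            int x = e.X - dragOffset.X, y = e.Y - dragOffset.Y;
            field.X = x / GridSize * GridSize;               // snap to grid via integer division
            field.Y = y / GridSize * GridSize;
            Invalidate();                                    // repaint at the new position
        }

        protected override void OnMouseUp(MouseEventArgs e)
        {
            dragging = false;
        }

        [STAThread]
        static void Main() { Application.Run(new TicketEditor()); }
    }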

How do I select walls along a particular axis, among all the other walls, in Revit using the Revit API?

I want to change the height of all walls, but the length of only those walls along a particular axis, for instance the x-axis.
Additionally, could you also tell me how I could alter the same dimensions for a house, where there are connected walls?
I see nothing in this code that would prevent it from working.
However, it seems to me that it does not make much sense.
One would seldom constrain all wall heights to be user defined to a certain value; instead, in most Revit models, walls are constrained to reach from a bottom level to a top level. Then, if the height of all walls needs to be modified, you would modify the elevation of the top level only.
The logic of the code guarantees that the wall location line will only be modified if newWallLine equals XYZ.BasisX. That may never be the case, since newWallLine is a Line object and XYZ.BasisX is an XYZ vector; what you presumably want to compare with the vector is the line's direction.
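To illustrate both points, here is a minimal hedged sketch, assuming it runs inside an add-in command with an open Document; the helper names are mine, not from the original code:

    using Autodesk.Revit.DB;

    public static class WallHelpers
    {
        // Change all wall heights indirectly: move the shared top level that
        // the walls' top constraints reference, instead of editing each wall.
        public static void SetTopLevelElevation(Document doc, ElementId topLevelId,
                                                double newElevationFeet)
        {
            using (Transaction t = new Transaction(doc, "Move top level"))
            {
                t.Start();
                Level top = (Level) doc.GetElement(topLevelId);
                top.Elevation = newElevationFeet;   // Revit's internal length unit is feet
                t.Commit();
            }
        }

        // Test whether a wall runs along the X axis: compare the *direction*
        // of its location line with XYZ.BasisX, never the Line object itself.
        public static bool RunsAlongX(Wall wall)
        {
            LocationCurve lc = wall.Location as LocationCurve;
            Line line = (lc == null) ? null : lc.Curve as Line;
            if (line == null) return false;         // curved or unsupported wall
            XYZ d = line.Direction.Normalize();
            return d.IsAlmostEqualTo(XYZ.BasisX) || d.IsAlmostEqualTo(XYZ.BasisX.Negate());
        }
    }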
I would recommend researching exactly what you wish to achieve and how to do so manually in the end user interface before addressing the task programmatically.
In general, if a feature is not available in the Revit product manually through the user interface, then the Revit API will not provide it either.
You should therefore research the optimal workflow and best practices to address your task at hand manually through the user interface first.
To do so, please discuss and analyse it with an experienced application engineer, product usage expert, or product support.
Once you have got that part sorted out, it is time to step up into the programming environment.
I hope this clarifies.

Detecting Handedness from Device Use

Is there any body of evidence that we could reference to help determine whether a person is using a device (smartphone/tablet) with their left hand or right hand?
My hunch is that you may be able to use accelerometer data to detect a slight tilt, perhaps only while the user is manipulating some sort of on-screen input.
The answer I'm looking for would state something like, "research shows that 90% of right-handed users that utilize an input mechanism tilt their phone an average of 5° while inputting data, while 90% of left-handed users utilizing an input mechanism have their phone tilted an average of -5°".
Having this data, one would be able to read accelerometer data and make informed decisions regarding the placement of on-screen items that might otherwise be in the way for left-handed or right-handed users.
You can definitely do this, but if it were me, I'd try a less complicated approach. First you need to recognize that no specific approach will yield 100% accurate results; they will be guesses, but hopefully highly probable ones. With that said, I'd explore the simple-to-capture data points of basic touch events. You can leverage these data points and pull the x/y coordinates on touch start and touch end:
touchStart: Triggers when the user makes contact with the touch surface and creates a touch point inside the element the event is bound to.
touchEnd: Triggers when the user removes a touch point from the surface.
Here's one way to do it: it could be reasoned that if a user is left-handed, they will use their left thumb to scroll up/down on the page. Now, based on the way the thumb rotates, swiping up will naturally cause the arc of the swipe to bow outwards. In the case of touch events, if the touchStart X is greater than the touchEnd X, you could deduce they are left-handed. The opposite could be true for a right-handed person: for a swipe up, if the touchStart X is less than the touchEnd X, you could deduce they are right-handed.
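The capture itself would use the JavaScript touch events above; the decision logic is trivial once you have the coordinates. A minimal sketch in C# with made-up names:

    public enum Handedness { Unknown, Left, Right }

    public static class HandednessGuesser
    {
        // For an upward swipe, a left thumb tends to arc so the touch ends
        // further left than it started; a right thumb does the opposite.
        public static Handedness GuessFromSwipe(float startX, float startY,
                                                float endX, float endY)
        {
            if (endY >= startY) return Handedness.Unknown;  // not an upward swipe (screen Y grows downward)
            if (startX > endX) return Handedness.Left;
            if (startX < endX) return Handedness.Right;
            return Handedness.Unknown;                      // perfectly vertical swipe: no signal
        }
    }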
Here's one reference on getting started with touch events. Good luck!
http://www.javascriptkit.com/javatutors/touchevents.shtml
There are multiple approaches and papers discussing this topic. However, most of them were written between 2012 and 2016. After doing some research myself, I came across a fairly new article that makes use of deep learning.
What sparked my interest is the fact that they do not rely on swipe direction, speed or position, but rather on the capacitive image each finger creates during a touch.
Highly recommend reading the full paper: http://huyle.de/wp-content/papercite-data/pdf/le2019investigating.pdf
What's even better, the dataset, together with Python 3.6 scripts to preprocess the data and to train and test the model described in the paper, is released under the MIT license. They also provide the trained models and the software to run the models on Android.
Git repo: https://github.com/interactionlab/CapFingerId

What could be causing some of my KMLs not to render from Fusion Tables?

I have 3 KMLs that do not draw at all and 2-3 that act sporadically depending on the zoom level. I checked the file limitations and I don't seem to be violating any of the limits. I went back to my original shapefiles to check for geometry errors. One of the files had geometry errors and I fixed them, yet that didn't seem to fix the problem of the KML not rendering. I've also implemented zoom functionality with Google's Visualization API and the geoxml3 processor. Here are some interesting things that happen with my application:
One of the KML files that does not draw will actually respond to the zoom functionality by zooming to its extent, but still won't draw the polygon; evidence that the KML is being parsed but not drawn.
One of the KML files that does not draw will eventually draw if I click on the polygon next to it and am zoomed in close enough. It will not draw initially.
I have two KML files that draw when zoomed out but 'disappear' when I zoom in.
My application is here and my fusion table is here. If anyone has had similar problems and was able to fix them, I would really appreciate knowing how it was accomplished, because I'm stumped at this point.
Thanks
First of all: Fusion Tables are still experimental.
Some issues:
South Nelson Elementary is missing in varID.
JV Humphries Secondary Polygons needs to be fixed.
I thought I would post an update.
It turns out some of my data did have geometry errors; those were fixed and converted to KML.
The problem is my actual code. It was originally written simply to display polygons from an array and to turn them on/off via checkboxes, so that the user could view the adjacent boundaries of the other polygons. I achieved this in my initial coding, and the user had to zoom into the area of interest via Google's map functionality.
Then I was asked to add a zoom function, so that when a checkbox was clicked the application would zoom to the polygon in question. This works, but it depends on the order in which the checkboxes are clicked. I'm fairly certain it has to do with how the empty array is populated as checkboxes are toggled on/off.
I don't fully understand the logic of how the code decides which polygon to zoom to. All I know is that if all checkboxes are unchecked and each checkbox is then checked on/off one at a time, the zoom functionality works.
If anyone has a suggestion on how to make each checkbox act 'independently', zooming regardless of the order clicked, I would appreciate it.

How to check whether two pictures are "touching" each other?

I'm writing a game in which the user has a spaceship and needs to "kill" some enemies that will try to kill him back.
I have a Texture2D for the user's spaceship picture, a bullet picture and an enemy picture.
I would like to know, after the user has shot the bullet at the enemy, how can I check that the bullet has hit the enemy?
In other words, what function checks whether one picture is "covering" (even partially) another one?
Thanks! :-)
Please have a look into the topic "2D Collision Detection". As you are using XNA the following site should give you a good start:
http://www.progware.org/Blog/post/XNA-2D-Basic-Collision-Detection.aspx
Basically you need to detect when two non-transparent pixels are overlapping, but to prevent unnecessary calculations, you first check if the bounding box for your ship and the enemy ships is even overlapping (since the pixels won't overlap if the bounding boxes don't).
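As a hedged sketch of that two-pass idea (the position fields are hypothetical; the per-pixel loop follows the pattern of the App Hub sample): the cheap pass uses XNA's Rectangle.Intersects, and only if that succeeds do you compare pixel alphas in the overlap region.

    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    public static class Collision
    {
        // Cheap pass: do the sprites' bounding boxes overlap at all?
        public static bool BoundsOverlap(Vector2 posA, Texture2D texA,
                                         Vector2 posB, Texture2D texB)
        {
            Rectangle a = new Rectangle((int) posA.X, (int) posA.Y, texA.Width, texA.Height);
            Rectangle b = new Rectangle((int) posB.X, (int) posB.Y, texB.Width, texB.Height);
            return a.Intersects(b);
        }

        // Exact pass: are any two non-transparent pixels overlapping?
        // colorsA/colorsB are fetched once at load time via Texture2D.GetData<Color>.
        public static bool PixelsOverlap(Rectangle a, Color[] colorsA,
                                         Rectangle b, Color[] colorsB)
        {
            int left   = System.Math.Max(a.Left, b.Left);
            int right  = System.Math.Min(a.Right, b.Right);
            int top    = System.Math.Max(a.Top, b.Top);
            int bottom = System.Math.Min(a.Bottom, b.Bottom);

            for (int y = top; y < bottom; y++)
                for (int x = left; x < right; x++)
                {
                    Color ca = colorsA[(x - a.Left) + (y - a.Top) * a.Width];
                    Color cb = colorsB[(x - b.Left) + (y - b.Top) * b.Width];
                    if (ca.A != 0 && cb.A != 0) return true;   // both pixels are opaque here
                }
            return false;
        }
    }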
Riemers.net has a good tutorial. Here's a good sample project on per-pixel collision detection from the App Hub.
I'm unaware of any pre-existing API functions that do this, but implementing it yourself will be a good exercise.
You should know the x/y coordinates of each of your picture's origins. You should also know the dimensions of each picture.
You can calculate the bounding box of each picture and check whether they have any points in common.

Newb: WPF custom graphic control - where to start

Apologies if there is a thread for this already; I couldn't find one that I could get my teeth into.
Anyway, I'm new to WPF and want to create a custom control that will be a sort of graphic control. The graphic will always consist of a circle containing a matrix of several squares (from several hundred to several thousand, actually). The squares need to respond to mouse click and mouse-over events (and ideally be navigable/selectable via keyboard). Each square will represent an object I've coded.
In the past I've used a grid control to display the coloured squares (with the VCL in CBuilder), but I would like to make a graphical version. (Actually, another question I'd like to ask is: is there a WPF grid control where I can set the colours of individual cells?)
The question is: where do I start? Do I start with a canvas and draw on it? Do I derive from an existing object? I'm just a little lacking in ideas on implementation, so any pointers or advice you can offer will be gratefully received.
BBz
First off, I would suggest getting a decent handle on WPF and how it approaches the problem set. It is vastly different from previous .NET desktop technologies such as WinForms. Once you have a decent understanding of the separation of logic from UI and of how WPF approaches the problem, you can dive in and begin making the right decisions based upon what you encounter.
The problem you mention can be solved in multiple ways. In regards to your question about making use of a Grid, that could be done, as it is a layout type; it is vastly superior to the Canvas in terms of arranging your visual structure. The defined rows/columns are nothing more than containers which can hold varying UI objects. Therefore, pushing a Rectangle into the Grid and colouring it as desired would give you the effect you are looking for. This Rectangle could then become a custom control, which would allow you to define varying properties on it, as well as specific triggers for mouse-overs, etc. (see the sketch below).
At a higher level you will want to encapsulate this logic in a UserControl, which will also host your custom control. Perhaps the UserControl contains the Grid, which in turn makes use of your custom control.
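
As a rough illustration of the Grid-of-Rectangles suggestion (names and sizes are mine; for several thousand cells you would eventually want something lighter, such as DrawingVisuals, but this shows per-cell colour and mouse events):

    using System;
    using System.Windows;
    using System.Windows.Controls;
    using System.Windows.Input;
    using System.Windows.Media;
    using System.Windows.Shapes;

    public class MatrixWindow : Window
    {
        public MatrixWindow()
        {
            const int rows = 20, cols = 20;                  // scale up as needed
            Grid grid = new Grid();
            for (int r = 0; r < rows; r++) grid.RowDefinitions.Add(new RowDefinition());
            for (int c = 0; c < cols; c++) grid.ColumnDefinitions.Add(new ColumnDefinition());

            for (int r = 0; r < rows; r++)
                for (int c = 0; c < cols; c++)
                {
                    Rectangle cell = new Rectangle();
                    cell.Fill = Brushes.LightGray;           // per-cell colour
                    cell.Margin = new Thickness(1);
                    Grid.SetRow(cell, r);
                    Grid.SetColumn(cell, c);
                    cell.MouseEnter += (s, e) => ((Rectangle) s).Fill = Brushes.Orange;        // mouse-over
                    cell.MouseLeftButtonDown += (s, e) => ((Rectangle) s).Fill = Brushes.Red;  // click
                    grid.Children.Add(cell);
                }

            Title = "Matrix sketch";
            Content = grid;
        }

        [STAThread]
        static void Main() { new Application().Run(new MatrixWindow()); }
    }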
Hopefully this gives you some ideas around how to get started, however getting a better understanding of WPF will help you immensely in achieving your goal.
