I asked this some time ago on GitHub and was asked to move the question here, so I will.
In my experience, starting a game on a HoloLens makes the camera start at 0,0,0 in the scene. When starting with an HMD, the head has approximately the correct height, which can also be adjusted in the Mixed Reality Portal if it is not perfect.
If those two were to meet in a networked environment, one user would see the other at their feet, or high up in the air when viewed the other way around.
To get those two to meet at eye level, you either raise one up or lower the other down. No matter the case, you need to know by how much.
The HoloLens does not have an internal height representation; at best you could calculate it from the generated spatial mesh. The HMD, on the other hand, does have information about its height, a base height even, otherwise I couldn't configure that in the Portal, kneel down etc. and still be at the correct height above the floor.
Now the question is, how do I read this base height for the HMD so I can lower the floor to that height, effectively setting the networked parties to eye level?
For now I have to set an arbitrary height of about 1.6 meters, but that's my colleague's standing height. I am about 1.93 meters tall.
NeerajW on GitHub wanted to see if he could find an API that returns the Portal default height, but never replied.
With the HoloLens 2 joining the community, that's now two AR devices that might want to meet VR avatars from around the world.
How do you guys do this?
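For what it's worth, once that base height is known by whatever means, the alignment itself is just a vertical offset. Here is a minimal sketch of the arithmetic only (the numbers are hypothetical, and it does not answer how to read the Portal's base height, which is the open question):

    # Sketch of the alignment arithmetic only. The HoloLens camera starts at y = 0,
    # the HMD camera starts at its configured base height above its floor.
    def vertical_offset(hmd_eye_height, hololens_eye_height=0.0):
        """How far to lower the HoloLens user's floor (or raise the HoloLens user)
        so that both parties meet at eye level in the shared world."""
        return hmd_eye_height - hololens_eye_height

    # An HMD user standing 1.93 m tall, with the HoloLens camera at the scene origin:
    print(vertical_offset(1.93))   # lower the shared floor 1.93 m below the HoloLens camera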
In your scenario, are you trying to 1-to-1 match the actual user heights? If so, you may wish to place the avatars at eye level (if they are small/hovering objects) or on the floor plane (if they are humanoid).
Since the VR users cannot actually see the HoloLens user, this may work well. For the HoloLens user it may feel odd, presuming the VR user is visible (in the same room).
In the Holograms 250 academy course (please note this uses the legacy HoloToolkit), the app represents HoloLens users as floating clouds in VR, while the VR user is represented to the HoloLens user as a small figure on an island model.
I hope this is helpful.
David
(I also have an inquiry with other members of the team to see if there are other ideas).
I want to change the height of all walls, but the length of only those walls that run along a particular axis, for instance the X axis.
Could you also tell me how I could alter the same dimensions for a whole house, where the walls are connected?
I see nothing in this code that would prevent it from working.
However, it seems to me that it does not make much sense.
One would seldom constrain all wall heights to be user defined to a certain value; instead, in most Revit models, walls are constrained to reach from a bottom level to a top level. Then, if the height of all walls needs to be modified, you would modify the elevation of the top level only.
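A minimal sketch of that level-based approach, assuming you drive the Revit API from Python via RevitPythonShell or pyRevit (the level name and target elevation below are hypothetical, and elevations are in Revit's internal units, i.e. feet):

    # Hypothetical sketch: raise the top level so every wall constrained to it gets taller.
    # Assumes RevitPythonShell / pyRevit, where __revit__ is the running UIApplication.
    from Autodesk.Revit.DB import FilteredElementCollector, Level, Transaction

    doc = __revit__.ActiveUIDocument.Document

    TOP_LEVEL_NAME = "Level 2"   # hypothetical name of the top constraint level
    NEW_ELEVATION_FT = 12.0      # target elevation in internal units (feet)

    t = Transaction(doc, "Change top level elevation")
    t.Start()
    for level in FilteredElementCollector(doc).OfClass(Level):
        if level.Name == TOP_LEVEL_NAME:
            level.Elevation = NEW_ELEVATION_FT   # walls constrained to this level follow
    t.Commit()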
As for your code: its logic guarantees that the wall location line will only be modified if newWallLine equals XYZ.BasisX. That will never be the case, since the line is a Line object and the vector is an XYZ.
I would recommend researching exactly what you wish to achieve and how to do so manually in the end user interface before addressing the task programmatically.
In general, if a feature is not available in the Revit product manually through the user interface, then the Revit API will not provide it either.
You should therefore research the optimal workflow and best practices to address your task at hand manually through the user interface first.
To do so, please discuss and analyse it with an experienced application engineer, product usage expert, or product support.
Once you have got that part sorted out, it is time to step up into the programming environment.
I hope this clarifies.
I am trying to build a system that, given an image of a car, can assess what percentage of the car is damaged and also identify which parts are damaged.
Is there any possible way to do this using Python and OpenCV or TensorFlow?
The GitHub repositories I found that are relevant to my work are these:
https://github.com/VakhoQ/damage-car-detector/tree/master/DamageCarDetector
https://github.com/neokt/car-damage-detective
But what they provide is a qualitative output (e.g. they say the car damage is high or low), whereas I want to print a quantitative output (the percentage of damage) along with the names of the individual parts that are damaged.
Is this possible ?
If so please help me out.
Thank you.
To extend the good answers given by #yves-daoust: It is not a trivial task and you should not try to do it at once with one single approach.
You should ask yourself how a human with a comparable task, say an expert who reviews these cars at the end of a leasing contract, proceeds. Then you have to formulate requirements and also restrictions for your system.
For instance, an expert first checks for any visual occurrences and rates them. Then they may check technical issues that may well be hidden from optical sensors: whether the car is drivable, driving a round to estimate whether the engine runs smoothly and the steering geometry is aligned (i.e. whether the car manages to stay in line), whether there are any minor vibrations that should not be there, and so on. They may also apply force, for example manually shaking the wheels to check whether the bearings are OK.
If you restrict your measurement system to just a normal camera sensor, you are somewhat limited in what your system is able to deliver.
If you just want to spot cosmetic damage, i.e. classify scratches in paint and rims, I'd say a state-of-the-art machine vision application should be able to help you to some extent:
First you'd need to detect the scratches. Bear in mind that the visibility of scratches, especially in the field with changing conditions (sunlight), can make this a very hard to impossible task for a cheap sensor. For example, to cope with reflections a system might need polarizing filters, and special-effect paints may interfere with your optical system to the point where you cannot spot anything.
Secondly, after you detect the position and dimensions of these scratches in camera coordinates, you need to transform them into real-world coordinates to obtain the real dimensions of the scratches. It would also be of great use to know the exact location of the scratch on the car, which would require a digital twin of the car and is no longer a trivial task.
After determining the extent of the scratch and its position on the car, you need to apply a cost model. Some car parts are easily fixable: a scratch in the bumper just means respraying the bumper, but a scratch in the C-pillar can easily mean repainting the whole back quarter if it is not supposed to be noticeable anymore.
The same goes for bigger scratches and cracks: the optical detection model needs to distinguish between scratches and cracks (which is very hard to do just by looking at them), and then the cost model can infer the cost, e.g. whether a bumper just needs a respray or a complete replacement (because it is cracked and not just scratched). This cost model may seem easy, but bear in mind that it needs to be adapted to every car you "scan", because a cheap-to-fix damage on one car body might be very hard to fix on a different car body. I'd say this might even be harder than spotting the initial scratches, because you'd need to obtain the construction plans / repair part lists of every vehicle you want to quote (the repair handbooks and part lists are mostly accessible if you are a registered mechanic, but they may involve licensing fees).
You see, this is a very complex problem composed of multiple hard sub-problems. The easiest, and probably the best, way to approach it is bottom-up: start with a simple "scratch detector" that just spots scratches in paint, then go from there, and you will quickly see what is possible and what is not.
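To make that bottom-up starting point concrete, here is a deliberately naive "scratch candidate" sketch using OpenCV in Python. The file name and all thresholds are made up, and under real field conditions (reflections, effect paints) this will not be reliable on its own; it only illustrates where such a pipeline might begin:

    # Toy scratch-candidate finder: look for thin, elongated high-contrast edges on a panel.
    import cv2

    img = cv2.imread("car_panel.jpg")              # hypothetical close-up of a body panel
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)       # suppress paint texture and sensor noise
    edges = cv2.Canny(blur, 50, 150)               # thin high-contrast lines = candidates

    # Keep only elongated contours; roundish blobs are more likely reflections or dirt.
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        aspect = max(w, h) / max(1, min(w, h))
        if aspect > 4 and cv2.arcLength(c, False) > 40:
            candidates.append((x, y, w, h))

    print("%d scratch-like regions found" % len(candidates))

From there you would still need the real-world scaling, the localisation on the car and the cost model described above before any damage percentage could be quoted.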
Is there any body of evidence that we could reference to help determine whether a person is using a device (smartphone/tablet) with their left hand or right hand?
My hunch is that you may be able to use accelerometer data to detect a slight tilt, perhaps only while the user is manipulating some sort of on screen input.
The answer I'm looking for would state something like, "research shows that 90% of right handed users that utilize an input mechanism tilt their phone an average of 5° while inputting data, while 90% of left handed users utilizing an input mechanism have their phone tilted an average of -5°".
Having this data, one could read the accelerometer and make informed decisions regarding the placement of on-screen items that might otherwise be in the way for left-handed or right-handed users.
You can definitely do this, but if it were me, I'd try a less complicated approach. First you need to recognize that no specific approach will yield 100% accurate results; they will be guesses, but hopefully highly probable ones. With that said, I'd explore the simple-to-capture data points of basic touch events. You can leverage these data points and pull the X/Y coordinates of the start and end of each touch:
touchStart: Triggers when the user makes contact with the touch surface and creates a touch point inside the element the event is bound to.
touchEnd: Triggers when the user removes a touch point from the surface.
Here's one way to do it: it could be reasoned that if a user is left-handed, they will use their left thumb to scroll up and down the page. Based on the way the thumb rotates, swiping up will naturally cause the arc of the swipe to bow outwards. In terms of touch events, if the touchStart X is greater than the touchEnd X, you could deduce they are left-handed. The opposite holds for a right-handed person: for a swipe up, if the touchStart X is less than the touchEnd X, you could deduce they are right-handed.
Here's one reference on getting started with touch events. Good luck!
http://www.javascriptkit.com/javatutors/touchevents.shtml
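Expressed as code, that swipe heuristic might look like the sketch below (Python for illustration; in a web page the X values would come from the touchstart/touchend events quoted above). The minimum-arc threshold and the majority vote are assumptions, not validated research:

    # Heuristic only: classify upward swipes by how their horizontal arc bows.
    def classify_swipe(start_x, end_x, min_arc=10.0):
        """Classify a single upward swipe; None if it is too straight to tell."""
        if start_x - end_x > min_arc:
            return "left"    # arc bows toward the left edge -> likely left thumb
        if end_x - start_x > min_arc:
            return "right"   # arc bows toward the right edge -> likely right thumb
        return None

    def guess_handedness(swipes):
        """Majority vote over several observed (start_x, end_x) pairs."""
        votes = [classify_swipe(s, e) for s, e in swipes]
        left, right = votes.count("left"), votes.count("right")
        if left == right:
            return None      # inconclusive
        return "left" if left > right else "right"

    print(guess_handedness([(220, 190), (230, 195), (210, 215)]))   # -> "left"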
There are multiple approaches and papers discussing this topic; however, most of them were written between 2012 and 2016. After doing some research myself, I came across a fairly new article that makes use of deep learning.
What sparked my interest is the fact that they do not rely on a swipe direction, speed or position but rather on the capacitive image each finger creates during a touch.
Highly recommend reading the full paper: http://huyle.de/wp-content/papercite-data/pdf/le2019investigating.pdf
What's even better, the data set, together with Python 3.6 scripts to preprocess the data and to train and test the model described in the paper, is released under the MIT license. They also provide the trained models and the software to run the models on Android.
Git repo: https://github.com/interactionlab/CapFingerId
I'm writing a game in which the user has a spaceship and needs to "kill" some enemies that will try to kill him back.
I have a Texture2D for the user's spaceship picture, a bullet picture and an enemy picture.
I would like to know, after the user has shot the bullet at an enemy, how can I check that the bullet has hit the enemy?
In other words, what function checks whether one picture is "covering" (even partially) another one?
Thnx!
:-)
Please have a look into the topic "2D Collision Detection". As you are using XNA the following site should give you a good start:
http://www.progware.org/Blog/post/XNA-2D-Basic-Collision-Detection.aspx
Basically you need to detect when two non-transparent pixels are overlapping, but to prevent unnecessary calculations, you first check if the bounding box for your ship and the enemy ships is even overlapping (since the pixels won't overlap if the bounding boxes don't).
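Conceptually, that two-phase test looks like the sketch below (written in Python with alpha masks purely for illustration; in XNA you would read the pixel data with Texture2D.GetData and can use Rectangle.Intersects for the first phase):

    # Phase 1: cheap axis-aligned bounding-box test.
    def rects_intersect(ax, ay, aw, ah, bx, by, bw, bh):
        return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

    # Phase 2: do any two non-transparent pixels overlap?
    # alpha_a / alpha_b are 2D lists of alpha values (0 = fully transparent),
    # positioned at world coordinates (ax, ay) and (bx, by).
    def pixels_collide(alpha_a, ax, ay, alpha_b, bx, by):
        ah, aw = len(alpha_a), len(alpha_a[0])
        bh, bw = len(alpha_b), len(alpha_b[0])
        left, right = max(ax, bx), min(ax + aw, bx + bw)
        top, bottom = max(ay, by), min(ay + ah, by + bh)
        if right <= left or bottom <= top:
            return False                      # bounding boxes do not even overlap
        for y in range(top, bottom):
            for x in range(left, right):
                if alpha_a[y - ay][x - ax] > 0 and alpha_b[y - by][x - bx] > 0:
                    return True               # two opaque pixels share this position
        return False

    # Example: two 2x2 sprites whose opaque corners overlap at world position (1, 1)
    a = [[0, 255], [255, 255]]
    b = [[255, 255], [255, 0]]
    print(pixels_collide(a, 0, 0, b, 1, 1))   # True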
Riemers.net has a good tutorial. Here's a good sample project on per-pixel collision detection from the app hub.
I'm unaware of any pre-existing API functions that do this, but implementing it yourself will be a good exercise.
You should know the x/y coordinates of each picture's origin. You should also know the dimensions of each picture.
You can calculate the bounding box of a picture, and whether there exist any points in common.
I work for a ticketing agency and we print tickets on our own ticket printer. I have been hand-coding the ticket designs and storing the templates in a database. If we need a new field added to a ticket, I add it manually and use the arcane coordinate system to estimate where the fields should go and how much the other fields need to move to accommodate the new info.
We always planned to automate this with a simple (I stress the word simple) graphical editor. Basically we don't foresee tickets changing radically in shape any time soon; we have one size of ticket, and the ticket printer firmware is super simple because it's more of an industrial machine: it has about 10 fonts and some really basic sizing interactions.
I need this editor to display a rectangle with the pixel dimensions of the ticket (it can even be actual size) and to have a resizable grid, represented by dots rather than lines, which can be toggled between superimposed and invisible on top of the ticket rectangle.
Then I want to be able to represent fields by drawing rectangles filled with the letter "x" that show the maximum size of the field (to prevent overlaps). These fields should be selectable, draggable and droppable in a snap to grid fashion.
I've worked out the maths of it but I have no idea how to draw rectangles and then draw grids in layers and then put further rectangles full of 'x'es on top of those. I also don't really know much about changing drawn positions in accordance with mouse events. It's simply not something I've ever had to do.
All the tutorials I've seen so far presume that you already know a lot about using the drawing objects and are seeking to extend that basic knowledge. I just need pointing towards a good tutorial on manipulating floating objects in a PictureBox in the first place.
Any ideas?
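Not a full answer, but the snap-to-grid and hit-testing part of this is plain arithmetic; here is a minimal sketch (Python for brevity, with made-up sizes) whose math carries over directly to WinForms MouseDown/MouseMove/Paint handlers:

    GRID = 8   # hypothetical grid cell size in pixels

    def snap(value, cell=GRID):
        """Snap a coordinate to the nearest grid line."""
        return int(round(value / cell)) * cell

    def hit_test(px, py, fields):
        """Return the topmost field rectangle (x, y, w, h) under the cursor, if any."""
        for x, y, w, h in reversed(fields):   # last drawn = topmost
            if x <= px <= x + w and y <= py <= y + h:
                return (x, y, w, h)
        return None

    fields = [(16, 16, 120, 24), (16, 48, 200, 24)]   # made-up field rectangles
    print(hit_test(20, 20, fields))                   # -> (16, 16, 120, 24)
    print(snap(37), snap(53))                         # -> 40 56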
For those of you in need of a guide to this field, which is unusual at least for those of us with a BIS background, I would heartily endorse:
https://web.archive.org/web/20141230145656/http://bobpowell.net/faqmain.aspx
I am now happily drawing graphical interfaces and getting them to respond to control inputs with not too much hassle.