Is it possible to localize the HoloLens in a specific map between application runs without using anchors? I would like to use the coordinate system provided by SpatialStationaryFrameOfReference, as it makes more sense for my use case than anchors. However, this would obviously result in a different origin for each application run. Therefore I am looking for a way to get a transform from the SpatialStationaryFrameOfReference to an absolute position in the current map.
For example, the 3D view in the Device Portal always shows the HoloLens in the correct relation to the current map, even when no anchors are placed in it. Therefore I thought it should be possible to do this from the application as well. So far the only approach I have thought of is placing some arbitrary anchor and using it to get a transform from the anchor's coordinate system to that of the stationary reference frame (using TryGetTransformTo). I guess this should technically work, but there might be a better way, right? Thanks.
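For concreteness, the anchor workaround I have in mind would look roughly like this (a minimal sketch against the WinRT perception APIs; the persistence step in the final comment is an assumption, not something I have verified):

```csharp
using System.Numerics;
using Windows.Perception.Spatial;

// Sketch: relate the stationary frame to a single "reference" anchor.
SpatialLocator locator = SpatialLocator.GetDefault();
SpatialStationaryFrameOfReference stationary =
    locator.CreateStationaryFrameOfReferenceAtCurrentLocation();

// Drop one anchor at the stationary frame's current origin.
SpatialAnchor anchor = SpatialAnchor.TryCreateRelativeTo(stationary.CoordinateSystem);
if (anchor != null)
{
    // Transform from the stationary frame's coordinate system to the anchor's;
    // null if the two systems cannot currently be related.
    Matrix4x4? toAnchor =
        stationary.CoordinateSystem.TryGetTransformTo(anchor.CoordinateSystem);

    // Persisting the anchor (e.g. in a SpatialAnchorStore) would let the next
    // run recover a map-stable origin and re-derive this transform.
}
```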
Another way to set the application's origin is to detect a QR code in the environment and establish a coordinate system from it. For more information, please see: QR code tracking
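As a rough illustration of that approach (assuming the Microsoft.MixedReality.QR package; the event wiring here is a sketch, not a complete sample):

```csharp
using Microsoft.MixedReality.QR;
using Windows.Perception.Spatial;
using Windows.Perception.Spatial.Preview;

public async void StartQRTracking()
{
    if (await QRCodeWatcher.RequestAccessAsync() != QRCodeWatcherAccessStatus.Allowed)
        return;

    var watcher = new QRCodeWatcher();
    watcher.Added += (sender, args) =>
    {
        // Each detected code exposes a spatial graph node id that can be
        // turned into a coordinate system anchored to the physical code.
        SpatialCoordinateSystem qrOrigin = SpatialGraphInteropPreview
            .CreateCoordinateSystemForNode(args.Code.SpatialGraphNodeId);

        // qrOrigin.TryGetTransformTo(...) can then relate it to your stationary frame.
    };
    watcher.Start();
}
```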
I want to change the height of all walls, but the length of only those walls that run along a particular axis, for instance the X axis.
Following on from that, could you also tell me how I could alter the same dimensions for a house, where there are connected walls?
I see nothing in this code that would prevent it from working.
However, it seems to me that it does not make much sense.
One would seldom constrain all wall heights to a user-defined value; instead, in most Revit models, walls are constrained to reach from a bottom level to a top level. Then, if the height of all walls needs to be modified, you would modify the elevation of the top level only.
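For illustration, a minimal Revit API sketch of that approach (the level name and the offset are placeholder assumptions):

```csharp
using System.Linq;
using Autodesk.Revit.DB;

public void RaiseTopLevel(Document doc)
{
    // Find the top level the walls are constrained to ("Level 2" is a placeholder).
    Level top = new FilteredElementCollector(doc)
        .OfClass(typeof(Level))
        .Cast<Level>()
        .First(l => l.Name == "Level 2");

    using (Transaction tx = new Transaction(doc, "Raise top level"))
    {
        tx.Start();
        // Raising the level elevation raises every wall whose top
        // constraint references this level.
        top.get_Parameter(BuiltInParameter.LEVEL_ELEV)
            .Set(top.Elevation + 5.0); // Revit internal units are feet
        tx.Commit();
    }
}
```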
The logic of the code guarantees that the wall location line will only be modified if newWallLine equals XYZ.BasisX. That may never be the case, since newWallLine is a Line object and XYZ.BasisX is an XYZ vector, so the equality comparison between the two can never succeed.
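What you presumably meant to compare is the direction of the wall's location line, along these lines (a hypothetical helper, not code from the question):

```csharp
using Autodesk.Revit.DB;

static bool RunsAlongX(Wall wall)
{
    LocationCurve lc = wall.Location as LocationCurve;
    Line line = (lc != null) ? lc.Curve as Line : null;

    // Compare the line's direction vector with XYZ.BasisX, not the Line itself.
    return line != null
        && (line.Direction.IsAlmostEqualTo(XYZ.BasisX)
            || line.Direction.Negate().IsAlmostEqualTo(XYZ.BasisX));
}
```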
I would recommend researching exactly what you wish to achieve and how to do so manually in the end user interface before addressing the task programmatically.
In general, if a feature is not available in the Revit product manually through the user interface, then the Revit API will not provide it either.
You should therefore research the optimal workflow and best practices to address your task at hand manually through the user interface first.
To do so, please discuss and analyse it with an experienced application engineer, product usage expert, or product support.
Once you have got that part sorted out, it is time to step up into the programming environment.
I hope this clarifies.
I'm trying to use UTTeleporterCustomMeshes in UDK in a way that is similar to the behavior in the game Portal (the actual portal itself, not the portal gun).
That is:
You can shoot through them as well as walk through them (DONE)
It appears as if you can see through the portal as if it were a window (DONE)
You can stand halfway through a portal and be on both sides simultaneously
Shadows and light will also carry through the teleporter
My problem is that I can't seem to find any detailed documentation for UTTeleporters.
Do the teleporters have some built in functionality for this kind of stuff or do I need to implement it all myself?
I.e., create a custom camera to capture the scene to be rendered onto my teleporter, and create my own custom teleporter script?
I'm just after SOME sort of direction or ideas as to how I could achieve this.
Cheers
EDIT:
I've since managed to get steps 1 and 2 working (UTPortal did the trick), though 3 and 4 still remain a challenge.
My first mistake was trying to use UTTeleporterCustomMeshes.
No matter where I went through, it would always spit me out at the middle of the target teleporter. Plus I'd have to do something like @Sebastian said with the TextureRenderTarget.
I still have a long way to go before it starts to look seamless!!
Then, onto stage 2!
As far as I know, Unreal does not support that type of behavior.
You can however build it yourself.
I would look at the TextureRenderTarget classes for rendering the material on the portal.
Standing in the portal with one foot on each "side" will be... very tricky to do, but shooting and walking through them should be simple enough by extending the Touch event in Actor.
Just wondering if anyone can point me to a good web framework for displaying large-scale networks.
I need the ability to display only a small portion of the network, while allowing the user to drill down on a certain node or portion of the network interactively.
Optionally, the ability to allow interactive editing of the network/graph; e.g., connecting nodes or breaking edges.
The simpler the better!
There's our library, mxGraph. If you want open source, you could try JIT or D3.
I had similar requirements and I tested about four libraries including d3 and infoVis/JIT.
I was using force-directed layout in both d3 and infoVis.
Both of them are quite close, but I ended up choosing infoVis/JIT because I had some problems with d3 for which the solutions were not easy.
1: When you have a graph with many nodes in d3, the graph keeps moving/animating for quite a long time. I found that this is because the d3 force layout animates per tick. I found some proposed solutions here and in forums, but they were not easy to apply and did not work for me.
2: Once the graph is rendered, if you try to drag a node, the whole graph moves and re-animates itself, whereas my requirement was to be able to drag and position individual nodes independently, keeping the rest of the graph as it is, so that the user can rearrange nodes if he/she wants to. I could not find any simple solution for this one in d3.
Both of these problems were solved in infoVis/JIT.
#"Need the ability to display only a small portion of the network, but allowing the possibility to drill down on certain node/portion of the network interactively."
I have implemented this functionality using infoVis.
First, I use the Google Static Maps API to get an image to display on an HTML/WML page.
Then I want to get the GPS position of the point where the user pressed on the image.
Is there some way to get the GPS position from the coordinates on the image?
The short answer is: probably not. You can't be sure exactly what the static map's dimensions are (the server might change the location slightly to fit things better, etc.). If you're just asking for a map by center and zoom, then you stand a slightly better chance, but it will still be tricky.
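For the center-and-zoom case, the conversion is plain Web Mercator math, so something like this sketch could work (C# is used only to illustrate the math; it assumes the server returned exactly the centre and zoom you requested, which is not guaranteed):

```csharp
using System;

static class StaticMapMath
{
    // Convert a pixel offset from the image centre of a static map (given its
    // centre lat/lng and integer zoom) back to a lat/lng. Assumes the standard
    // 256-pixel Web Mercator world tile; dy is positive downward.
    public static (double Lat, double Lng) PixelToLatLng(
        double centerLat, double centerLng, int zoom, double dx, double dy)
    {
        double scale = 256.0 * Math.Pow(2, zoom); // world size in pixels

        // Centre of the image in world pixel coordinates.
        double siny = Math.Sin(centerLat * Math.PI / 180.0);
        double worldX = scale * (0.5 + centerLng / 360.0);
        double worldY = scale * (0.5 - Math.Log((1 + siny) / (1 - siny)) / (4 * Math.PI));

        // Apply the pixel offset, then invert the projection.
        double x = worldX + dx;
        double y = worldY + dy;

        double lng = (x / scale - 0.5) * 360.0;
        double n = Math.PI - 2.0 * Math.PI * (y / scale);
        double lat = 180.0 / Math.PI * Math.Atan(Math.Sinh(n));
        return (lat, lng);
    }
}
```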
If you're trying to add dynamic behaviour to a static map, have you considered the Maps JavaScript API instead? Finding the coordinates of where a user clicks is trivial there. (Also, you can disable the zooming, panning, controls, etc. if you want, so that it still feels static.)
I was wondering how the Nike website makes the change you can see when selecting a color or a sole. At first I thought they were only using images, and that when the user picked a color the relevant part was simply swapped out; but when I selected a different sole I noticed it didn't change like an image would; it looked more as if it were being rendered. Does anybody happen to know how this is done, or where I can get further info about creating this effect? :)
It's hard to know for sure, but my guess would be that they're using a rendering service similar to that provided by Adobe's Scene7.
It's a product that is used to colorize/customize a base product image based on user choices.
If you're interested in using the service, I'd suggest signing up for their weekly webinar. I attended one a while back and was very impressed with their offering. They showed the Converse site (which had functionality almost identical to the Nike site) as a demo.
A lot of these tools are built out in Flash using a variety of techniques:
1) You can use Flash's BitmapData object to directly shift the hues of the pixels in your item. This is probably the simplest technique, but often limits you to simple color transformations (see the sketch after this list for the same idea outside Flash).
2) You can pre-render transparent PNGs (or photos, I guess) containing the various patterns or textures you would want to show on your object and have them dynamically added to your stage at runtime. This, I think, offers the highest fidelity, but means you need all of your items rendered up front.
3) You can create 3D Collada files and load them via a library like Papervision3D, then dynamically change the texture at runtime. This is the most memory-intensive technique and tends to result in far worse fidelity, but in exchange you get a full 3D object that you can view in space.
I'm sure there are other techniques but those are the top 3 I can think of. I hope that helps!
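To show what technique 1 amounts to in practice, here is the same per-channel recoloring idea sketched in C# with System.Drawing rather than Flash's BitmapData (purely illustrative; the channel scale factors are placeholders for whatever color the user picks):

```csharp
using System.Drawing;
using System.Drawing.Imaging;

static class Recolor
{
    // Redraw the base product image through a ColorMatrix that scales the
    // red/green/blue channels, tinting the item toward the chosen color.
    public static Bitmap Tint(Bitmap source, float r, float g, float b)
    {
        var result = new Bitmap(source.Width, source.Height);
        var matrix = new ColorMatrix(new float[][]
        {
            new float[] { r, 0, 0, 0, 0 },
            new float[] { 0, g, 0, 0, 0 },
            new float[] { 0, 0, b, 0, 0 },
            new float[] { 0, 0, 0, 1, 0 }, // alpha left untouched
            new float[] { 0, 0, 0, 0, 1 },
        });

        using (var attrs = new ImageAttributes())
        using (var gfx = Graphics.FromImage(result))
        {
            attrs.SetColorMatrix(matrix);
            gfx.DrawImage(source,
                new Rectangle(0, 0, source.Width, source.Height),
                0, 0, source.Width, source.Height,
                GraphicsUnit.Pixel, attrs);
        }
        return result;
    }
}
```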