I would like to make a mobile application so that when a user points the camera at a building, the app renders various information about it. My problem is that I don't know whether this can be done. The only approach I can think of is to take an image of the building and upload it as an image target in Unity. But what if the image changes over time (vegetation?), or the user points the camera from a different perspective than the one I used?
Is there a way to make this so that the problem mentioned above won't be an issue?
You can approach this in several ways (or even combine them):
Write a mechanism that can download updated Vuforia datasets from a server into your app at runtime. This way, you can update the building images whenever you wish and ensure the building is still detected if something changes (see the sketch after this list)
Make sure you take enough pictures of the building from enough perspectives. You can use many images of the building in a single Vuforia dataset
Try to find a partial section or sections of the building that are very likely to remain unchanged.
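For the first option, here is a minimal Kotlin sketch of swapping in a downloaded dataset at runtime. It assumes Vuforia's classic Android API (com.vuforia.*) and that your own download code (not shown) has already saved the dataset's .xml/.dat pair to `datasetPath`:

```kotlin
import com.vuforia.DataSet
import com.vuforia.ObjectTracker
import com.vuforia.STORAGE_TYPE
import com.vuforia.TrackerManager

// Sketch: load a dataset that was downloaded to device storage and
// activate it on the object tracker. Assumes Vuforia is already initialized.
fun swapInDownloadedDataset(datasetPath: String): Boolean {
    val tracker = TrackerManager.getInstance()
        .getTracker(ObjectTracker.getClassType()) as? ObjectTracker ?: return false

    // The tracker must be stopped while (de)activating datasets.
    tracker.stop()
    val dataset: DataSet = tracker.createDataSet() ?: return false
    if (!dataset.load(datasetPath, STORAGE_TYPE.STORAGE_ABSOLUTE)) return false
    if (!tracker.activateDataSet(dataset)) return false
    tracker.start()
    return true
}
```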
I'm trying to create an app that can play two different media files, say Ambience.mp3 and fgAudio.mp3. I've created two different MediaPlayers, say ambienceMP and fgAudioMP.
Playing both files at the same time is not an issue. I want to add two sliders that control the volume of each file separately. However, setVolume() doesn't seem to work in this case, and the other solutions I found recommend methods that seem overly complicated for such a simple task (programming in a nutshell).
Is there a simple way to fix this, or do I need to import some library? I would highly appreciate some simple code that uses hard-coded volumes for each of the files; I can extend it to sliders myself.
I'm not sure if this is relevant, but I'm using Kotlin and not Java.
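For reference, a minimal Kotlin sketch of the setup described above, with hard-coded volumes. The resource names are assumptions; note that MediaPlayer.setVolume(left, right) takes a 0.0–1.0 scalar per channel and applies per player instance, so two players can be controlled independently:

```kotlin
import android.content.Context
import android.media.MediaPlayer

// Assumes Ambience.mp3 and fgAudio.mp3 live in res/raw as
// R.raw.ambience and R.raw.fgaudio (hypothetical resource names).
fun playBoth(context: Context) {
    val ambienceMP = MediaPlayer.create(context, R.raw.ambience)
    val fgAudioMP = MediaPlayer.create(context, R.raw.fgaudio)

    // setVolume(left, right) is per-instance and scales relative to the
    // stream volume, so each player is controlled independently.
    ambienceMP.setVolume(0.2f, 0.2f)  // hard-coded: quiet background
    fgAudioMP.setVolume(1.0f, 1.0f)   // hard-coded: full foreground

    ambienceMP.start()
    fgAudioMP.start()
}
```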
My intentions:
I intend to:
implement vehicles as containers
simulate/move these containers along roads from .osm-based maps
My view of the problem:
I have loaded the XML-based .osm file and processed it in Python using xml.dom. But I am not satisfied with the loading performance, because later on I will have to create more vehicle containers to be simulated on the same roads.
Suggestions needed:
This is my first time solving a maps-related problem. I need suggestions on how to proceed with this set of requirements, keeping performance/efficiency in mind. Suggestions in terms of implementation will be much appreciated. Thanks in advance!
Simulating lots of vehicles by running lots of docker containers in parallel might work, I suppose. Maybe you're initialising the same image with different start locations etc. passed in as ENV vars? As a practical way of doing agent simulations this sounds a bit over-engineered to me, but as an interesting docker experiment it might make sense.
You'll probably also need a central service for holding and sharing state (the positions of all vehicles) and serving it back to the multiple agents.
The challenge of loading an .osm file into some sort of database or internal map representation doesn't seem like the hardest part; since it can be done once at initialisation, I imagine it's not the most performance-critical part of this.
I'm thinking you'll probably want to do "routing" through the road network (taking account of one-way streets etc.), giving your agents a purposeful path to follow to a destination. This will get more complicated if you want to model interactions between agents: for example, getting stuck in traffic because other agents are going the same way, or even decisions to re-route because of traffic. So you may want quite a flexible routing system, perhaps self-coded.
But there are lots of open source routing systems which work with OSM data, to at least draw inspiration from. See this list: https://wiki.openstreetmap.org/wiki/Routing#Developers
Popular choices like OSRM are designed to scale up to country-sized or even global OpenStreetMap data, but I imagine that's overkill for you (you're probably simulating within a city road network?). Even so, it's probably easy enough to get working in a docker container.
Or you might find something lightweight, like the code of the JOSM routing plugin, easier to embed in your docker image and customize (although I see that uses a library called "JGraphT").
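To give a sense of what a JGraphT-based approach looks like (an illustrative Kotlin sketch, not code from the plugin; node ids and weights are placeholders): model junctions as vertices and road segments as weighted edges, then run a stock shortest-path algorithm:

```kotlin
import org.jgrapht.alg.shortestpath.DijkstraShortestPath
import org.jgrapht.graph.DefaultWeightedEdge
import org.jgrapht.graph.SimpleDirectedWeightedGraph

fun main() {
    // Vertices are OSM node ids (placeholders here); edges are road
    // segments weighted by length in metres. A directed graph lets you
    // model one-way streets by only adding one direction.
    val road = SimpleDirectedWeightedGraph<Long, DefaultWeightedEdge>(DefaultWeightedEdge::class.java)
    listOf(1L, 2L, 3L).forEach { road.addVertex(it) }
    road.setEdgeWeight(road.addEdge(1L, 2L), 120.0)
    road.setEdgeWeight(road.addEdge(2L, 3L), 80.0)
    road.setEdgeWeight(road.addEdge(1L, 3L), 250.0)  // longer direct road

    // Dijkstra gives each agent a purposeful path to its destination.
    val path = DijkstraShortestPath(road).getPath(1L, 3L)
    println(path.vertexList)  // [1, 2, 3]
}
```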
Then, working backwards from a calculated route, you can compute interpolated steps along that path, which will let your simulated agents take a step on each iteration (simulated movement).
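A rough sketch of that interpolation step, assuming a flat local projection in metres and a fixed step distance per tick (all names here are made up):

```kotlin
import kotlin.math.hypot

data class Pt(val x: Double, val y: Double)

// Resample a polyline (the calculated route) into points spaced `step`
// apart, so an agent advances one point per simulation tick.
fun interpolateRoute(route: List<Pt>, step: Double): List<Pt> {
    val out = mutableListOf(route.first())
    var carry = 0.0  // distance already walked toward the next sample
    for ((a, b) in route.zipWithNext()) {
        val segLen = hypot(b.x - a.x, b.y - a.y)
        var d = step - carry  // distance into this segment of the next sample
        while (d <= segLen) {
            val t = d / segLen
            out += Pt(a.x + t * (b.x - a.x), a.y + t * (b.y - a.y))
            d += step
        }
        carry = segLen - (d - step)  // leftover distance past the last sample
    }
    return out
}
```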
I have set up a color scheme to identify the different building-services systems in Revit and Navisworks. When I uploaded the model to the Forge Viewer, the colors were shown correctly at first. However, when I zoomed in, some of the colors disappeared. Has anyone had this problem? How can it be solved?
Thank you.
(Screenshots: Forge display error; colors disappearing when zooming in on level Lo01)
Apologies for any inconvenience caused. This might be a known issue in our model extraction service for Revit; it has been logged as REVIT-120524 in our case system so that our dev team can allocate time to investigate it. You can send this id to forge.help#autodesk.com to inquire about updates in the future.
BTW, the cause of this issue, as far as I can tell, is that there are many MEP system types, each with its own differently colored material, and fittings take their material color from the first system type of the corresponding MEP systems. Currently, there is no formal solution to avoid this; we apologize again. Fortunately, there is a workaround you could try:
Split your MEP model into several RVT files, each containing a single pipe system, duct system, and so on.
Upload them to Forge for translation separately.
Load the translated models via the Forge Viewer.
Hope it helps.
This workaround is working on my live projects now, but it might not suit your case. It is not a formal solution, so you would use it at your own risk.
I need to use Vuforia to implement AR in an Android app using Android Studio.
I was able to run the samples separately with no issues. Does anyone know how to use the video playback and image target samples at the same time while the camera is active?
For example, I have two images in my database, located in assets. When the first image is recognized, I need to play a video (video playback), and when the second image is recognized, another image is placed with AR above the target (image target).
I know this is a bit late, but perhaps it can be of assistance nevertheless. I cannot give you any code, but I can tell you for sure that there is no real problem in doing this; in fact, it is only a matter of correctly integrating two of Vuforia's samples. Once you have implemented the functions for drawing an image on a target and for drawing a video on a target, you simply invoke the relevant function based on the target id. It would be much easier to help with a specific difficulty once you have actually attempted the integration.
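To make the dispatch idea concrete, here is an illustrative Kotlin sketch (not taken from either sample; the target names are assumptions, and renderVideoOn / renderAugmentationOn are stand-ins for the drawing code you would lift from the VideoPlayback and ImageTargets samples):

```kotlin
import com.vuforia.State
import com.vuforia.TrackableResult

// Stand-ins for the drawing code lifted from each Vuforia sample:
fun renderVideoOn(result: TrackableResult) { /* VideoPlayback-style rendering */ }
fun renderAugmentationOn(result: TrackableResult) { /* ImageTargets-style rendering */ }

// Called once per rendered frame with the current tracking state:
// dispatch on the trackable's name to pick the right rendering path.
fun renderFrame(state: State) {
    for (i in 0 until state.numTrackableResults) {
        val result = state.getTrackableResult(i)
        when (result.trackable.name) {  // target names from your dataset
            "FirstImage" -> renderVideoOn(result)
            "SecondImage" -> renderAugmentationOn(result)
        }
    }
}
```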
I need to work with 3D models of some places. Google Earth has the 3D building layer with "Gray Buildings" in it. This is exactly what I require. Is there any way to get the 3D models that are used? Is there a Google Earth API (other than the JavaScript stuff) that would help? (I'm working in .NET.)
Or is there at least a manual solution for getting these models into, say, SketchUp?
Thanks a lot!
While there still isn't support for getting building geometry from Google's APIs, OpenStreetMap does expose some data you can use. Check out this guide:
http://wiki.flightgear.org/OpenStreetMap_buildings
Making a request like
http://overpass-api.de/api/xapi?way[bbox=-74.02037,40.69704,-73.96922,40.73971][building=*][#meta]
will return XML with each building's base outline and (in some cases) its height. You can use this info to extrude some very simple buildings: http://i.imgur.com/ayNPB.png
To fill in the missing height values (and they're missing for most buildings), I try to use the area of the building's footprint to estimate how tall it might be compared to nearby buildings. Unfortunately, until Google makes their models public, this will have to do.
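For illustration, a small Kotlin sketch that parses such a response with the JDK's built-in XML parser and pulls out each way's height tag; the footprint-area fallback at the end is just one guess at the kind of heuristic described, not the exact formula used:

```kotlin
import java.io.File
import javax.xml.parsers.DocumentBuilderFactory
import kotlin.math.sqrt

// Parse an Overpass/XAPI response (saved as buildings.osm) and report
// each way's id plus its height tag, if present.
fun main() {
    val doc = DocumentBuilderFactory.newInstance()
        .newDocumentBuilder()
        .parse(File("buildings.osm"))

    val ways = doc.getElementsByTagName("way")
    for (i in 0 until ways.length) {
        val way = ways.item(i) as org.w3c.dom.Element
        var height: Double? = null
        val tags = way.getElementsByTagName("tag")
        for (j in 0 until tags.length) {
            val tag = tags.item(j) as org.w3c.dom.Element
            if (tag.getAttribute("k") == "height") {
                height = tag.getAttribute("v").toDoubleOrNull()
            }
        }
        println("way ${way.getAttribute("id")}: height=${height ?: "missing"}")
    }
}

// One possible fallback when height is missing: assume buildings with
// larger footprints tend to be taller, and scale a typical height by
// relative footprint area.
fun guessHeight(footprintArea: Double, typicalArea: Double, typicalHeight: Double): Double =
    typicalHeight * sqrt(footprintArea / typicalArea)
```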
There is currently no way to download models from within Google Earth. Even if there were, extracting the data is against the TOS. Many of the models come from government or private sources, so there are issues with licensing the data as a whole. It is worth noting, however, that a lot of the models in Google Earth are located in the SketchUp 3D Warehouse, so maybe you could get the data you want from there.
Also, to work with the JavaScript API from managed code, you might want to check out this control library I have put together: http://code.google.com/p/winforms-geplugin-control-library/. Whilst the controls themselves may not be applicable, the ideas behind them should get you under way. Essentially, it is a series of wrappers and helpers that let you seamlessly integrate the plugin into a WinForms application.
You can also read more about Cities in 3D (the project that developed the low-res building layer) here: http://sketchup.google.com/3dwh/citiesin3d/