How can I position some objects in my room using WebXR?

I would like to use WebXR to build an application that displays some objects at predefined positions in a room. Using the camera on my phone, I would capture the real world around me with computer-generated content superimposed on it. Basically augmented reality, but with the objects positioned relative to the real-world room rather than to the x-y-z axes of the camera. To start simple, I wanted to build a model of my room and place the objects there, making them visible when using the app. Unfortunately, I can't find anything useful online. Does anyone know a solution to this, or have some references to suggest?
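For reference, here is roughly what I imagine the session setup would look like, pieced together from the WebXR hit-test examples. This is an untested sketch: the "enter-ar" button id is a placeholder, and the rendering itself is omitted.

```javascript
// Untested sketch of an immersive-ar session with hit testing;
// the "enter-ar" button id is a placeholder.
const canvas = document.createElement('canvas');
const gl = canvas.getContext('webgl', { xrCompatible: true });

document.getElementById('enter-ar').addEventListener('click', async () => {
  const session = await navigator.xr.requestSession('immersive-ar', {
    requiredFeatures: ['hit-test', 'local-floor'],
  });
  session.updateRenderState({ baseLayer: new XRWebGLLayer(session, gl) });

  const refSpace = await session.requestReferenceSpace('local-floor');
  const viewerSpace = await session.requestReferenceSpace('viewer');
  const hitTestSource = await session.requestHitTestSource({ space: viewerSpace });

  session.requestAnimationFrame(function onFrame(time, frame) {
    const hits = frame.getHitTestResults(hitTestSource);
    if (hits.length > 0) {
      // A pose in room coordinates (not camera coordinates) that a
      // virtual object could be anchored to and rendered at.
      const pose = hits[0].getPose(refSpace);
    }
    session.requestAnimationFrame(onFrame);
  });
});
```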
If you are very familiar with WebXR, please let me know because this technology is very new to me (and apparently to the world) and I would like to ask some questions :)
Thank you!

Related

How do I go about building this application?

So I have this application laid out, but I am unsure about some aspects of how to go about it.
It's a website that supplies a raw ketogenic diet plan for the user's dog, aimed at dogs with cancer. There are other aspects of the website that I have figured out, like where to hold state and such.
Where I'm a little confused about how to proceed is the calculations for the ketogenic page's shopping cart. The user can choose one fat source, one protein source, and one green vegetable source from the sources provided. I want to make this as balanced and complete as possible for the user's dog, and obviously one protein, one vegetable, and one fat source alone are not balanced and complete. I need to factor in the number of calories needed per unit of body weight, so there will be an input where the user can enter their dog's weight. I have done some of the math to make it complete and balanced on paper, factoring in amino acids, vitamins and minerals, and the omega 6:3 ratio.
What I'm confused about is where I am going to hold all this data. It's a lot of data, and it's based on many factors, such as weight and activity level, and the keto ratio (like 1:1 or 2:1) depending on what the user selects.
I obviously need a backend and need to build an API, but how would I return the complete and balanced diet to the user when so many other factors play in? Where would I store this other data? In objects? Variables? And then put it on the backend? I would greatly appreciate any help. Thanks in advance.
Hi, how are you? First of all, congrats; I think the idea is great.
For starters, I think it's important for you to lay out which technologies you'll be using.
1- Choose the technologies you are going to use. Will you use the MERN stack? React for the front end, MongoDB for the database, Node.js for the back end with Express as a framework?
2- Will you be the one providing all the data for the dogs' diets? If not, find some website with an API for that data.
3- I am not any kind of expert on JS or React, I'm just following the logic here. But does that data change, or is it always the same? I would advise you to use objects, so you can "play" with them and pass them around between components (I would advise you to create a React class where you store all of those objects; make sure you export that class).
4- Just lay everything out and START BUILDING!!! It sounds hella overwhelming to start such a big project, but once you get going some i's will be dotted and you will get a better understanding of what you are actually going to build.
5- A few tips: if you use an API, you won't need to store the data; otherwise, store it yourself, in your database or even a JSON file, for example (there are great YouTube tutorials on how to do so).
Keeping this in mind, you asked where you'll be storing all that data. If you have an API, you won't store anything; if you don't, JSON is your best shot. It's really easy and intuitive to use and easy to read inside a React component. See the sketch below for one way to lay it out.
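For example, a sketch of what such a data module could look like. Every food and number here is a made-up placeholder, not real nutritional data; RER = 70 × kg^0.75 is a commonly cited resting-energy formula for dogs, but the activity multiplier is illustrative only.

```javascript
// foodData.js -- sketch of a plain data module. Every food and number
// here is a made-up placeholder, not real nutritional data.
export const fatSources = {
  fishOil: { kcalPerGram: 9.0, omega6to3: 0.1 },
};
export const proteinSources = {
  beef: { kcalPerGram: 2.5, proteinPerGram: 0.26 },
};
export const vegSources = {
  broccoli: { kcalPerGram: 0.34 },
};

// RER = 70 * kg^0.75 is a commonly cited resting-energy formula for dogs;
// the default activity multiplier is illustrative only.
export function dailyCalories(weightKg, activityFactor = 1.6) {
  const rer = 70 * Math.pow(weightKg, 0.75);
  return rer * activityFactor;
}
```

The same module could also live behind an Express route on the backend, so the client only sends the weight and the three selections and receives the computed plan.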
I hope this gave you some extra clarification on what you'll need to do in order to start.
PS: This answer is purely based on my limited React (and web dev) experience; I am no expert. Let me know if there's anything else I can help you with.

UML Modelling Question

I am in the process of developing some use cases for a mobile mapping/GPS app. Users will be able to use this app similarly to Google Maps. I was wondering if anyone had valuable input on some possible use cases.
Here are some I came up with myself:
1) Get Current Location
2) Set Destination Location
3) Create Fastest Route
4) View Alternative Routes
5) Estimate Traffic on Routes
If someone could help me elaborate or comment on my direction that would be helpful!
My first impulse was to flag your question as "too broad", as you are basically asking for help with your requirements analysis. But I'll give a few hints.
Your 5 use cases don't look bad. But they appear to be just a first rough sketch of your app's functionality that needs to be refined. A good model, be it UML or anything else, must help its reader gain some insight. As they stand, these 5 use cases could be named by any child who has seen a navigation device once in her life. To be meaningful, questions like the following should be asked, and they will probably lead to a more detailed use case analysis:
How are destination locations selected? If there is more than one place called Jacksonville, how will the user be informed, and how will she select the right one? Does selecting the location consist of more than one step, say country - city - road - block, to assist the user?
How do map data get into the application?
What kind of alternative routes are considered and how should they be calculated?
How will traffic data get into the application?
Try to put yourself into the developer's position. Which questions will she need to have clarified to build the right application?

Detecting Handedness from Device Use

Is there any body of evidence that we could reference to help determine whether a person is using a device (smartphone/tablet) with their left hand or right hand?
My hunch is that you may be able to use accelerometer data to detect a slight tilt, perhaps only while the user is manipulating some sort of on-screen input.
The answer I'm looking for would state something like, "research shows that 90% of right-handed users utilizing an input mechanism tilt their phone an average of 5° while inputting data, while 90% of left-handed users utilizing an input mechanism have their phone tilted an average of -5°".
Having this data, one would be able to read accelerometer data and make informed decisions regarding the placement of on-screen items that might otherwise be in the way for left-handed or right-handed users.
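For reference, reading that tilt in the browser would look roughly like this. This is just a sketch: the ±5° threshold is my hypothetical figure from above, not an established one.

```javascript
// Sketch: guess handedness from left/right tilt while the user types.
// gamma is the left-to-right tilt in degrees (roughly -90..90, negative
// when tilted left). The 5-degree threshold is a hypothetical figure,
// not measured data. Note that iOS requires requesting permission first
// via DeviceOrientationEvent.requestPermission().
window.addEventListener('deviceorientation', (event) => {
  const tilt = event.gamma;
  if (tilt === null) return;
  if (tilt > 5) {
    console.log('guess: right-handed');
  } else if (tilt < -5) {
    console.log('guess: left-handed');
  }
});
```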
You can definitely do this, but if it were me, I'd try a less complicated approach. First, you need to recognize that no single approach will yield 100% accurate results; they will be guesses, but hopefully highly probable ones. With that said, I'd explore the simple-to-capture data points of basic touch events. You can leverage these data points and pull the x/y coordinates at the start and end of each touch:
touchstart: Triggers when the user makes contact with the touch surface and creates a touch point inside the element the event is bound to.
touchend: Triggers when the user removes a touch point from the surface.
Here's one way to do it. It could be reasoned that if a user is left-handed, they will use their left thumb to scroll up/down on the page. Based on the way the thumb rotates, swiping up will naturally cause the arc of the swipe to bow outwards. In terms of touch events, if the X at touchstart is greater than the X at touchend, you could deduce they are left-handed. The opposite holds for a right-handed person: for a swipe up, if the X at touchstart is less than the X at touchend, you could deduce they are right-handed.
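A minimal sketch of that check (untested; I'm listening on the whole document for simplicity, and you'd want to ignore taps and average over many swipes):

```javascript
// Sketch: compare the X coordinate at touchstart vs. touchend of a
// vertical swipe; the arc of the thumb bows outwards from the hand.
let startX = null;

document.addEventListener('touchstart', (e) => {
  startX = e.changedTouches[0].clientX;
});

document.addEventListener('touchend', (e) => {
  if (startX === null) return;
  const endX = e.changedTouches[0].clientX;
  if (startX > endX) {
    console.log('guess: left-handed');
  } else if (startX < endX) {
    console.log('guess: right-handed');
  }
  startX = null;
});
```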
Here's one reference on getting started with touch events. Good luck!
http://www.javascriptkit.com/javatutors/touchevents.shtml
There are multiple approaches and papers discussing this topic. However, most of them were written between 2012 and 2016. After doing some research myself, I came across a fairly new article that makes use of deep learning.
What sparked my interest is the fact that they do not rely on swipe direction, speed, or position, but rather on the capacitive image each finger creates during a touch.
Highly recommend reading the full paper: http://huyle.de/wp-content/papercite-data/pdf/le2019investigating.pdf
What's even better, the data set, together with Python 3.6 scripts to preprocess the data and to train and test the model described in the paper, is released under the MIT license. They also provide the trained models and the software to run the models on Android.
Git repo: https://github.com/interactionlab/CapFingerId

I can't figure out where to start with GIS application development, or which technology to select

I am very new to GIS development and, to be frank, I have no background in it at all. I searched the web, but the tutorials I found seemed to assume the reader has some background information.
The thing is that I am confused about what to read or learn. There seem to be lots of technologies, and I feel lost, since some speak about OpenLayers, GeoServer, MapServer, Google Maps, and OpenStreetMap.
So here is what I am supposed to develop, and I hope you can give me advice about which technology to use and where I should start reading, given that I know almost nothing.
Case 1: a closed system for about 20 users only, who can specify locations on the map; the web application will store the latitude and longitude of the locations and show the markers. I wanted to use the Google Maps API, but I ruled that out since their license requires you to purchase the service if the system is a closed one. So what technology should I use in this case? I need a free option. Also, I will only be using a web server, so if the solution involves running my own GeoServer or something like that, I won't be able to do it.
Case 2: I am supposed to display the roads and routes between two given points, and probably add some notes on the map. For this case I can use my own map server/GeoServer, but again I want your suggestions.
Of course, the solution needs to be open source.
Finally, I hope you can tell me what to start reading first.
Start by looking over at https://gis.stackexchange.com/, starting with tags like [web-mapping].
Some topics in particular you may want to look at are:
https://gis.stackexchange.com/questions/8113/steps-to-start-web-mapping
https://gis.stackexchange.com/questions/8238/where-how-to-learn-about-getting-started-with-web-gis
https://gis.stackexchange.com/questions/13868/looking-for-a-developer-friendly-web-gis
As for skills and tutorials, look at:
https://gis.stackexchange.com/questions/17227/free-gis-workshops-tutorials-and-applied-learning-material
https://gis.stackexchange.com/questions/913/web-gis-development-skill-sets
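As a concrete free option for your Case 1 (not covered in the links above, so treat it as one suggestion among many), here is a minimal sketch using Leaflet with OpenStreetMap tiles. It assumes the Leaflet JS/CSS are loaded and a div with id "map" exists; the coordinates are placeholders.

```javascript
// Sketch using Leaflet (https://leafletjs.com) with OpenStreetMap tiles.
// Assumes the Leaflet JS/CSS are loaded and a <div id="map"> exists;
// the coordinates are placeholders.
const map = L.map('map').setView([51.505, -0.09], 13);

L.tileLayer('https://tile.openstreetmap.org/{z}/{x}/{y}.png', {
  attribution: '&copy; OpenStreetMap contributors',
}).addTo(map);

// Users click to drop a marker; e.latlng.lat / e.latlng.lng are the
// values your web application would store.
map.on('click', (e) => {
  L.marker(e.latlng).addTo(map);
});
```

Leaflet needs no map server of your own, only static tile URLs, which matches your web-server-only constraint.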

Getting data from objects that collide

I'm currently designing a game using Cocos2d. There's no code yet, as I'm still developing my ideas. But I've run across a question I can't answer, and I want to know if I'm just missing something. Here's what I'm currently thinking:
I am "dropping" multiple blocks from the top of the screen and they move down the screen in random directions. They will eventually settle at the bottom of the screen and stack up one on top of the other. Eventually, while falling, some blocks are going to collide with others. When two blocks collide I want to test to see if certain characteristics of each block are equal (e.g. size, color, orientation, etc.). Each block is it's own object, will handle it's own movement and collision detection, and will have accessor methods for size, color, orientation, etc.
Here's my question:
Am I correct in thinking that each block is a separate unit in itself and doesn't know anything about the other blocks? Block A, for instance, collides with Block B and only knows that it collided with something, but doesn't know it was another block? If this is so, how do I do a proper comparison? How do I tell which block has collided with which block, how do I get access to each block's data, and where do I do the comparison? In the layer?
I'd love to be pointed in a decent direction here. I'm not really sure whether what I want to do is even doable. Any suggestions?
You could use a physics engine that usually comes along with Cocos2d: either Chipmunk or Box2D. The physics engine will take care of collisions for you, and if you implement collision callbacks, then you can know when two objects hit each other. You can then check the characteristics of each object and react accordingly. This tutorial on Chipmunk and Cocos2d integration might be helpful.
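Whichever engine you pick, the collision callback typically hands you both colliding shapes, so the comparison can live in one place (e.g. your layer or collision handler) instead of inside either block. An engine-agnostic sketch of the idea, with made-up property names:

```javascript
// Engine-agnostic sketch: the collision callback receives both blocks,
// so neither block needs to know about the other ahead of time.
// The property names (size, color) are illustrative.
function onBlockCollision(blockA, blockB) {
  if (blockA.size === blockB.size && blockA.color === blockB.color) {
    // The blocks match: merge them, score points, play an effect, etc.
  }
}

// With Chipmunk or Box2D, you would typically attach your game object
// to the physics body's user data when creating it, then recover both
// game objects inside the engine's collision callback and call:
//   onBlockCollision(shapeA.userData, shapeB.userData);
```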
