I'm making a hide-and-seek game app with Android Studio, so my application needs to know the location of each player's device (absolute coordinates such as GPS are not required).
At first we tried to build an IMU with the accelerometer, gyroscope, and geomagnetic sensor, like in this video (https://www.youtube.com/watch?v=6ijArKE8vKU), but after researching it in a number of ways we decided to take a different approach.
So the approach I'm considering now is to get each player's current position from the information in ARCore's Concurrent Odometry and Mapping (COM) process.
However, most documentation and tutorials seem to start with the object-placement/anchor part.
How can I get the current player's position, or the device's motion-tracking information, from ARCore?
You can use FirstPersonCamera.transform.position to get the current x/y/z position of your device in the current scene.
My goal is to implement a method that tracks persons with a single camera. For that, I'm using Scaled-YOLOv4 to detect persons in the scene, then I generate points inside their bounding boxes using cv2.goodFeaturesToTrack and track them with Lucas-Kanade optical flow (cv2.calcOpticalFlowPyrLK).
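For reference, a minimal sketch of that detect-then-track pipeline, assuming the Scaled-YOLOv4 step is hidden behind a hypothetical detect_person_boxes helper and the LK parameters are only illustrative defaults:

import cv2
import numpy as np

# Hypothetical stand-in for the Scaled-YOLOv4 detector: returns a list of
# (x, y, w, h) person boxes for a frame, which are then fed to
# seed_points_in_box; the real model setup is not shown here.
def detect_person_boxes(frame):
    raise NotImplementedError

def seed_points_in_box(gray, box, max_corners=20):
    """Generate trackable points inside one bounding box with goodFeaturesToTrack."""
    x, y, w, h = box
    mask = np.zeros_like(gray)
    mask[y:y + h, x:x + w] = 255
    return cv2.goodFeaturesToTrack(gray, maxCorners=max_corners,
                                   qualityLevel=0.01, minDistance=7, mask=mask)

def track_points(prev_gray, gray, prev_pts):
    """Propagate points with pyramidal Lucas-Kanade and drop the ones that failed."""
    next_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, gray, prev_pts, None,
        winSize=(21, 21), maxLevel=3,
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
    keep = status.reshape(-1) == 1
    return prev_pts[keep], next_pts[keep]

Filtering on the returned status flags (and, if needed, a forward-backward consistency check) is usually what keeps stray points from drifting onto a different person.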
The problem is that sometimes the points make huge jumps, and I can't tell why. The following video shows the problem I'm facing: at second 0:02, the green dots jump in a weird manner, which makes my method detect that person as a new person.
https://www.veed.io/view/37f98715-40c5-4c07-aa97-8c2242d7806c?sharingWidget=true
My question is: is this a limitation of LK optical flow, or am I doing something wrong? And is there a recommended optical-flow method for tracking, or an example implementation of single-camera multi-person tracking using optical flow? I couldn't find much literature or code about it.
I want to create a tracker-like device to find objects such as keys or other important things. I want to attach the sensor to the valuable item so I can find where I left the object. I can't use a motion sensor, an ultrasonic sensor, or an air-proximity sensor, because they only check the distance in one direction. I need to find the distance to the object from any direction.
Consider using one of these two:
GPS
RDF (Radio Direction Find)
The first is great if you have open sky and the client (the searching device) is able to get a GPS position (think of a smartphone).
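For the GPS option, once the tag and the searching phone can both report a fix, the remaining work is just the great-circle distance between the two coordinates; a small sketch (the coordinates in the example are made up):

from math import asin, cos, radians, sin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes (haversine formula)."""
    r = 6371000.0  # mean Earth radius in metres
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi, dlam = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * r * asin(sqrt(a))

# Example: distance between the seeker's fix and the tag's last reported fix.
print(round(haversine_m(48.8584, 2.2945, 48.8606, 2.3376), 1), "m")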
The second is good for indoor use, but it can be hard to program and to find parts for. Look at Soloshot: it follows a beacon attached to the person of interest. I don't have their spec, but I bet it is a kind of RDF. Airplane avionics are also based on the RDF idea. Read the Wikipedia article on RDF.
Others may come up with other ideas; those were the first that popped into my mind.
Does anyone know whether a single static active RFID tag is able to detect motion (e.g. a moving human or object) by itself, without the use of any extra tags or sensors?
You might be able to do it by running a permanent inventory and recording both the read time and the signal strength received from the tag in each session. Both are an indication that either the orientation or the distance has changed, but they are not exclusively correlated with motion (both factors can change even if the tag is not moving), so you should do extensive testing before settling on this solution.
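As an illustration of that idea only (the read_tag call is a hypothetical placeholder for whatever API your reader vendor exposes, and the thresholds are arbitrary):

from statistics import mean

def read_tag():
    """Hypothetical reader call: returns the RSSI (dBm) reported for the tag
    on one inventory round. Replace with your reader vendor's API."""
    raise NotImplementedError

def looks_like_motion(rssi_history, new_rssi, window=10, threshold_db=4.0):
    """Flag possible motion when the RSSI drifts away from its recent average.

    As noted above, this is only an indication: RSSI also changes with tag
    orientation, people walking between reader and tag, multipath, and so on."""
    if len(rssi_history) < window:
        return False
    baseline = mean(rssi_history[-window:])
    return abs(new_rssi - baseline) > threshold_db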
Since you are talking about an active tag, there are manufacturers that incorporate motion sensors into their tags in order to save battery (the tag emits more often when it is moving), so you could contact them to see which of them allow you to gather data from the sensor.
If you are thinking about fixing the tag to a wall and having it detect when someone passes by, I do not know of such a product: there are tags with integrated thermometers or even humidity sensors, but not area-of-interest motion detection. For that you can use a wireless motion sensor.
I am making a little game using node.js for the server and a .js file embedded in an HTML5 canvas page for the clients. The players each have an object they can move around with the arrow keys.
I have made two different ways of updating the game. The first was sending the new position of the player every time it changes. It worked, but my server had to process around 60 x/y pairs a second (the client's update rate is 30/sec and there were two players moving non-stop).
The second method was to send the new position and speed/direction of the player's object only when their direction or speed changes, so on the other clients the player's movement was interpolated using the direction/speed from the last update. My server only had to process very few x/y/speed/direction packets; however, my clients experienced a little lag when the packets arrived, since the interpolated position was often a little bit away from the actual position written in the packet.
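To make the second method concrete, the client-side part boils down to extrapolating from the last packet; a rough sketch (written in Python only for brevity since the field names and update details are illustrative anyway; the real client is JavaScript):

import time

class RemotePlayer:
    """Client-side view of another player: extrapolate between server packets."""

    def __init__(self):
        self.x = self.y = 0.0           # position from the last packet
        self.vx = self.vy = 0.0         # speed/direction from the last packet
        self.last_update = time.monotonic()

    def on_packet(self, x, y, vx, vy):
        """Apply an x/y/speed/direction packet (sent only when direction or speed changes)."""
        self.x, self.y, self.vx, self.vy = x, y, vx, vy
        self.last_update = time.monotonic()

    def predicted_position(self):
        """Where to draw the player this frame, assuming it kept moving the same way."""
        dt = time.monotonic() - self.last_update
        return self.x + self.vx * dt, self.y + self.vy * dt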
Now my question is: which method would you recommend? And how should I implement lag compensation for either method?
If you have low latency, interpolate from the position at which the object is currently drawn to the new position. With low latency this does not make much of a visible difference.
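For example, instead of snapping to the position in the packet, you can ease the drawn position toward it a little every frame (sketched in Python for brevity; the 0.2 blend factor is arbitrary):

def ease_toward(drawn, target, factor=0.2):
    """Move the drawn (x, y) a fraction of the way toward the corrected position
    each frame, so small errors disappear without a visible jump."""
    dx, dy = target[0] - drawn[0], target[1] - drawn[1]
    return drawn[0] + dx * factor, drawn[1] + dy * factor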
If you have high latency, you can implement a kind of EPIC.
http://www.mindcontrol.org/~hplus/epic/
You can also check how it is done in BrowserQuest:
https://github.com/mozilla/BrowserQuest
Good luck!
When using the PhotoCamera, one must create an instance of the PhotoCamera as well as a VideoBrush, and then assign that PhotoCamera instance as the source of the VideoBrush instance before the camera can be initialized. Example:
PhotoCamera camera;
VideoBrush brush;

// Create the camera and hook its Initialized event.
camera = new PhotoCamera();
camera.Initialized += CameraInitialized;

// The camera has to be attached as the source of a VideoBrush
// (via the SetSource extension method) before initialization.
brush = new VideoBrush();
brush.SetSource(camera);
The VideoBrush is clearly useful in scenarios where the developer wishes to create a viewfinder for the camera video stream by associating the VideoBrush instance with the brush of a visual object like a Canvas.Background or Rectangle.Fill. However, when that is not the case, requiring the developer to still go through the motions of creating a VideoBrush seems somewhat random at first glance.
So, two questions: why does the PhotoCamera always need to be associated with a VideoBrush?
What is the performance impact associated with attaching the PhotoCamera to a VideoBrush? Specifically, how are calls to GetPreviewBuffer(Argb|Y|YCbCr) impacted by the associated VideoBrush?
Thanks!
PS: hopefully this doesn't come off as pointed in any way; I'd just like to have a better understanding of why this requirement exists and how it impacts performance.
PPS: the improvements in the WP7 SDK for Mango are amazing; I'm looking forward to seeing what people come up with now that access to the sensors has been opened up.
In Mango you simply have two options: either do as you suggested above and use a video frame in your app to take pictures, essentially grabbing a single frame from the video brush.
Or you can use the old NoDo method of using the PhotoChooserTask, which will launch the framework camera app separately and return an image.
Obviously there are pros and cons to both methods, so just choose the one that suits you.