Which sensor should I use to find the distance to a particular device in IoT? - sensors

I want to create a tracker-like device to find objects such as keys or other important items. I want to attach a sensor to the valuable item so I can find where I left it. I can't use a motion sensor, an ultrasonic sensor, or an IR proximity sensor, because they only measure distance in one direction. I need to be able to find the distance to the object from any direction.

Consider using one of these two:
GPS
RDF (Radio Direction Finding)
The first is great if you have open sky and the client (the searching device) is able to navigate via GPS (think of a smartphone).
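In the GPS case the tag would report its own fix (over whatever uplink you give it) and the searching phone would compute the distance to that fix. A minimal sketch of that last step in Python, assuming you already have both coordinate pairs (the sample coordinates are made up):

    import math

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in meters between two WGS84 coordinates."""
        r = 6371000.0  # mean Earth radius in meters
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dlat = math.radians(lat2 - lat1)
        dlon = math.radians(lon2 - lon1)
        a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    # distance from the phone's fix to the tag's last reported fix
    print(haversine_m(52.5200, 13.4050, 52.5206, 13.4094))  # ~305 m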
The second is good indoors, but it can be hard to program and to find parts for. Look at SoloShot: it follows a beacon attached to a person of interest. I don't have their spec, but I'd bet it is a kind of RDF. Aircraft avionics are also based on the RDF idea. Read the Wikipedia article on RDF.
Others may come up with other ideas; those were the first that popped into my mind.

Related

Tracking using Lucas-Kanade Optical Flow shows weird behavior; points are jumping

My goal is to implement a method that tracks persons with a single camera. For that, I'm using Scaled YOLOv4 to detect persons in the scene, then I generate points inside their bounding boxes using cv2.goodFeaturesToTrack, and track them using Lucas-Kanade optical flow (cv2.calcOpticalFlowPyrLK).
The problem is that sometimes the points make huge jumps, and I can't tell why. The following video shows the problem I'm facing; specifically, at second 0:02 the green dots jump in a weird manner, which makes my method detect that person as a new person.
https://www.veed.io/view/37f98715-40c5-4c07-aa97-8c2242d7806c?sharingWidget=true
My question is: is this a limitation of LK optical flow, or am I doing something wrong? And is there a recommended optical flow method for tracking, or an example implementation of single-camera multi-person tracking using optical flow? I couldn't find much literature or code about it.
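For reference, jumps like this are often filtered out by checking the status flags and adding a forward-backward consistency check on top of the two calls described above. A minimal sketch of that filter (window size, pyramid depth, and the 1-pixel threshold are illustrative, not taken from the question):

    import cv2
    import numpy as np

    def track_points(prev_gray, next_gray, pts, fb_thresh=1.0):
        """Track pts with pyramidal LK and drop unreliable ones via a
        forward-backward consistency check (a common fix for jumping points).
        pts: float32 array of shape (N, 1, 2), e.g. from cv2.goodFeaturesToTrack."""
        lk = dict(winSize=(21, 21), maxLevel=3,
                  criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
        nxt, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None, **lk)
        back, st2, _ = cv2.calcOpticalFlowPyrLK(next_gray, prev_gray, nxt, None, **lk)
        fb_err = np.linalg.norm(pts - back, axis=2).ravel()  # round-trip drift per point
        good = (st.ravel() == 1) & (st2.ravel() == 1) & (fb_err < fb_thresh)
        return nxt[good], good  # surviving points and the keep-mask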

How to implement AoA/currentLocation with Bluetooth?

I have developed an app where the executing device acts as a beacon. As a scan result, different BLE devices appear with their respective RSSI values.
Is it possible to use these RSSI values to determine the device's own current position with AoA? I have already read some documentation on this, but I have not found anything like an algorithm.
How can AoA be implemented?
Can someone tell me the necessary steps to do this?
Is triangulation also possible if the three reference points are moving, or do you only search for the three nearest points?
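One note that may clarify the terminology: RSSI alone gives a (rough) distance estimate, not an angle; AoA proper, as specified in Bluetooth 5.1 direction finding, needs an antenna array and IQ samples from the controller. With three RSSI-derived distances to anchors at known positions you can trilaterate instead. A minimal sketch, assuming a log-distance path-loss model whose parameters (tx_power, n) are illustrative and must be calibrated for your hardware:

    import numpy as np

    def rssi_to_distance(rssi, tx_power=-59, n=2.0):
        """Log-distance path-loss model: tx_power is the RSSI at 1 m,
        n the environment exponent (both need per-device calibration)."""
        return 10 ** ((tx_power - rssi) / (10 * n))

    def trilaterate(anchors, dists):
        """Least-squares position from 3+ anchors (x, y) and distances (m)."""
        (x1, y1), d1 = anchors[0], dists[0]
        A, b = [], []
        for (xi, yi), di in zip(anchors[1:], dists[1:]):
            A.append([2 * (xi - x1), 2 * (yi - y1)])
            b.append(d1 ** 2 - di ** 2 + xi ** 2 - x1 ** 2 + yi ** 2 - y1 ** 2)
        pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
        return pos  # estimated (x, y)

    anchors = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)]  # known beacon positions (m)
    dists = [rssi_to_distance(r) for r in (-60, -70, -65)]
    print(trilaterate(anchors, dists))  # ~[1.37, 2.23] for these made-up numbers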

Detecting Handedness from Device Use

Is there any body of evidence that we could reference to help determine whether a person is using a device (smartphone/tablet) with their left hand or right hand?
My hunch is that you may be able to use accelerometer data to detect a slight tilt, perhaps only while the user is manipulating some sort of on-screen input.
The answer I'm looking for would state something like, "research shows that 90% of right handed users that utilize an input mechanism tilt their phone an average of 5° while inputting data, while 90% of left handed users utilizing an input mechanism have their phone tilted an average of -5°".
Having this data, one would be able to read accelerometer data and make informed decisions regarding the placement of on-screen items that might otherwise be in the way for left-handed or right-handed users.
You can definitely do this, but if it were me, I'd try a less complicated approach. First you need to recognize that no single approach will yield 100% accurate results; they will be guesses, but hopefully highly probable ones. With that said, I'd explore the simple-to-capture data points of basic touch events. You can leverage these data points and pull the x/y coordinates on touch start/end:
touchStart: Triggers when the user makes contact with the touch surface and creates a touch point inside the element the event is bound to.
touchEnd: Triggers when the user removes a touch point from the surface.
Here's one way to do it: it could be reasoned that if a user is left-handed, they will use their left thumb to scroll up/down on the page. Based on the way the thumb rotates, swiping up will naturally cause the arc of the swipe to bow outwards. In terms of touch events, if the touchStart X is greater than the touchEnd X, you could deduce they are left-handed. The opposite holds for a right-handed person: for a swipe up, if the touchStart X is less than the touchEnd X, you could deduce they are right-handed. A sketch of this heuristic follows below.
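A minimal sketch of that heuristic in Python (the coordinate arguments and the 10-pixel dead zone are illustrative assumptions; on a real device you would feed in the touchstart/touchend coordinates captured in JavaScript):

    def guess_handedness(start_x, end_x, min_dx=10):
        """Classify a vertical swipe by the horizontal drift of its arc.
        Returns 'left', 'right', or None if the swipe is too straight to judge."""
        dx = start_x - end_x
        if abs(dx) < min_dx:  # dead zone: arc too small to infer anything
            return None
        # arc bows outwards: start right of end -> left thumb, and vice versa
        return 'left' if dx > 0 else 'right'

    print(guess_handedness(220, 190))  # 'left'
    print(guess_handedness(190, 220))  # 'right'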
Here's one reference on getting started with touch events: http://www.javascriptkit.com/javatutors/touchevents.shtml
Good luck!
There are multiple approaches and papers discussing this topic; however, most of them were written between 2012 and 2016. After doing some research myself, I came across a fairly new article that makes use of deep learning.
What sparked my interest is the fact that they do not rely on swipe direction, speed, or position, but rather on the capacitive image each finger creates during a touch.
I highly recommend reading the full paper: http://huyle.de/wp-content/papercite-data/pdf/le2019investigating.pdf
What's even better, the data set, together with Python 3.6 scripts to preprocess the data and to train and test the model described in the paper, is released under the MIT license. They also provide the trained models and the software to run the models on Android.
Git repo: https://github.com/interactionlab/CapFingerId
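For a feel of the approach, here is a minimal sketch (not the authors' code) of a small CNN classifier over capacitive touch images; the input size and architecture are illustrative assumptions, so see the repo above for the real preprocessing and models:

    import torch
    import torch.nn as nn

    class HandednessNet(nn.Module):
        """Toy CNN over single-channel capacitive images (left vs. right thumb)."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),  # pool to one value per channel
            )
            self.head = nn.Linear(32, 2)  # two classes: left thumb, right thumb

        def forward(self, x):  # x: (batch, 1, H, W) capacitive image
            return self.head(self.features(x).flatten(1))

    logits = HandednessNet()(torch.rand(1, 1, 27, 15))  # 27x15 is an assumed size
    print(logits.shape)  # torch.Size([1, 2])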

Detect motion with RFID without the use of sensors

Does anyone know if a single static active RFID tag is able to detect motion (e.g., a moving human or object) by itself, without the use of any extra tags or sensors?
You could be able to do it by running a permanent inventory and recording both the time and the signal strength received from the tag in each session. Changes in either are an indication that orientation or distance has changed, but they are not exclusively correlated with motion (both factors can change even if the tag is not moving), so you should test extensively before settling on this solution.
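A minimal sketch of that idea, flagging motion when the RSSI collected over recent inventory rounds swings more than a threshold (the window size and threshold are illustrative and need exactly the calibration the caveat above asks for):

    from collections import deque

    def make_motion_detector(window=5, threshold_db=4.0):
        """Return an update(rssi_db) callable that flags likely motion."""
        readings = deque(maxlen=window)
        def update(rssi_db):
            readings.append(rssi_db)
            if len(readings) < window:
                return False  # still filling the calibration window
            spread = max(readings) - min(readings)
            return spread > threshold_db  # large RSSI swing -> likely motion
        return update

    detect = make_motion_detector()
    for rssi in [-62, -61, -63, -62, -61, -55, -70]:  # made-up inventory readings
        print(detect(rssi))  # flips to True once the swing exceeds 4 dB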
Since you are talking about an active tag, there are manufacturers that incorporate motion sensors into their tags in order to save battery (the tag emits more often when it is moving), so you should contact them to see which of them allow you to gather data from the sensor.
If you are thinking about fixing the tag on a wall and having it detect when someone passes by, I do not know of such a product: there are tags with integrated thermometers or even humidity sensors, but not area-of-interest motion detection; for that you can use a wireless motion sensor.

RFID Limitations

My graduate project is a Smart Attendance System for a university using RFID.
What if one student has multiple cards (cheating) and wants to mark his friend as present as well? In this situation my system will not recognize the human adulteration: it will register whatever RFID tags the reader detects, mark both students as present, and store them in the database.
I have been facing this problem from the beginning, and it is a huge glitch in my system.
I need a solution or any idea for this problem, whether it can be implemented in the code or in real life, to identify the humans.
There are a few ways you could do this depending upon your dedication, the exact tech available to you, and the consistency of the environment you are working with. Here are the first two that come to mind:
1) Create a grid of reader antennas on the ceiling of your room and use signal response times to the three nearest readers to get a decent level of confidence as to where the student's tag is. If two tags register as being too close, display the associated names for the professor to call out and confirm presence. This solution will be highly dependent upon the precision of your equipment and the stability of temperature/humidity in the room (and possibly other factors, like the presence of liquid and metal).
2) Similar to the first solution, but a little different. Some readers and tags (Impinj R2000 and Indy readers, Impinj Monza 5+ for sure, maybe others as well) have the ability to report a response time and a phase angle associated with the signal received from an interrogated tag. Using a setup similar to the first, you can get a much higher level of reliability and precision with this method.
Your software could randomly pick a few names of attending students, so that the professor can ask them to identify themselves. This will not eliminate the possibility of cheating, but it increases the risk of being caught.
Another idea: count the number of attendees (either by the professor or by a camera plus software) and compare that to the number of RFID tags visible. A small sketch of both checks follows below.
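A minimal sketch of those two checks (the names and the sample size are illustrative):

    import random

    def spot_check(attending_names, k=3):
        """Pick a few scanned names for the professor to verify in person."""
        return random.sample(attending_names, min(k, len(attending_names)))

    def headcount_mismatch(people_counted, tags_seen):
        """Flag the session when the headcount and the unique tag count disagree."""
        return people_counted != len(set(tags_seen))

    print(spot_check(["Aisha", "Omar", "Lina", "Sami"]))  # three random names
    print(headcount_mismatch(3, ["t1", "t2", "t3", "t4"]))  # True: one tag too many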
There is no complete solution for this RFID limitation.
But you can combine biometric (fingerprint) recognition with the RFID card. With this, your system has to do the following (a sketch of the whole flow follows this list):
Integrate a biometric scanner with your RFID reader
Store the biometric data on the card
and while taking attendance:
Read the UID
Have the student scan their fingerprint
Match the scanned fingerprint against the one stored on the card (step 2)
Mark attendance (present if the biometrics match, absent if they don't)
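A minimal sketch of that flow, where reader, scanner, and db stand in for whatever your hardware SDK and database provide (read_uid, read_template, scan_fingerprint, match, and record are hypothetical names):

    def mark_attendance(reader, scanner, db):
        """Combine the card UID with an on-card fingerprint template check."""
        uid = reader.read_uid()                    # step 1: read the card UID
        card_template = reader.read_template()     # template stored on the card (step 2)
        live_scan = scanner.scan_fingerprint()     # step 3: student scans a finger
        present = scanner.match(live_scan, card_template)  # step 4: compare
        db.record(uid, "present" if present else "absent")
        return present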
Well, we all have that glitch, and you can't do much about it, but with the help of a camera system I think you can minimize it.
Why use a camera system and not a biometric fingerprint system? Let's rephrase the question: why use RFID at all if there is a biometric fingerprint system? ;)
What is ideal to use is RFID middleware that handles the tag reading.
Once the reader detects a tag, the middleware simply calls the security camera system, requests a snapshot, and stores it in the database. I'm using an RFID middleware called Envoy.
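A minimal sketch of that hook (Envoy's real API will differ; camera.snapshot and db.save_read are placeholder names):

    def on_tag_read(tag_id, camera, db):
        """When a tag is read, store a camera snapshot alongside the read event."""
        image = camera.snapshot()          # ask the camera system for a frame
        db.save_read(tag_id, image=image)  # keep tag + photo for a later audit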
