I understand a place (like a business) can set up a geofence through which phones pass. My question is, can a PHONE have its own geofence (always moving, geographically, as the phone moves, with the phone at the center of the geofence area) through which another phone can pass, either because the first phone is moving, the second phone is moving, or because both are moving?
From Webopedia's definition, it appears I am able to answer my own question: "Geo-fencing, or geofencing, is a term that refers to software tools or applications that utilize global positioning systems (GPS) or radio frequency identification (RFID) to establish a virtual perimeter or barrier around a physical geographical area." I'm concluding from this definition that a mobile phone cannot establish its own "mobile" geofence through which other phones may pass. If this is incorrect, I'd appreciate any additional insight anyone has.
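For what it's worth, nothing in that definition prevents the perimeter from being recomputed around a moving center: if both phones report their positions to an app or server, a "mobile geofence" reduces to a periodic distance check against the first phone's latest location. A minimal sketch (the coordinates and radius are made up for illustration):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def inside_moving_geofence(center, other, radius_m):
    """True if `other` is within `radius_m` of the (moving) `center` phone.

    Re-run this every time either phone reports a new position; the fence
    effectively travels with the center phone."""
    return haversine_m(center[0], center[1], other[0], other[1]) <= radius_m

# Phone A in central Berlin, phone B roughly 100 m east of it.
a = (52.5200, 13.4050)
b = (52.5200, 13.4065)
```

The practical obstacles are battery drain and update frequency, not the geometry.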
I have an idea to prevent leaks of confidential data through camera-captured photos of documents.
This is the main scenario.
The software will be installed on the user's computer.
When the computer is turned on, the software runs automatically and encodes that computer's device ID and user ID into the screen content, for example as an image.
Let's assume that an information thief captures the computer screen with his camera.
I need to retrieve the user's identity from the captured photo.
For the implementation of this idea, I think the following steps are the main focus:
1. Embed user identification data in the computer screen steganographically.
2. Decode the embedded information from the camera-captured photo.
My biggest concern is how to make the embedded information (the user ID, for example) invisible to the user.
My second concern is how to decode the original information from a partially captured photo.
My possible solution to the second concern is holography.
It is known that a hologram can be reconstructed from a portion of itself.
But the problem is that the hologram would be too visible to users.
It is a trade-off: if I increase the hologram's visibility (reduce its transparency), the chance of reconstruction increases, but that is bad for user-friendliness.
Any ideas or possible solutions would be appreciated.
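To illustrate the redundancy part of the second concern without holography: the payload can simply be tiled across the whole image, so any sufficiently large crop still contains at least one complete copy. The toy sketch below uses LSB embedding, which a real camera capture would destroy, so treat it only as a demonstration of the tiling idea; the `SYNC` marker and the `embed`/`extract` helpers are invented for this sketch.

```python
# Toy demonstration of redundant (tiled) embedding: the payload is repeated
# end to end across the image so that any crop holding one full copy can be
# decoded. A production system would need a camera-robust watermark instead
# of LSBs; this only shows the redundancy/partial-capture idea.

SYNC = [1, 0, 1, 1, 0, 0, 1, 0]  # marker that locates the start of a copy

def to_bits(s):
    return [b >> i & 1 for b in s.encode() for i in range(7, -1, -1)]

def from_bits(bits):
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[j:j + 8]))
        for j in range(0, len(bits), 8)
    ).decode()

def embed(pixels, user_id):
    """Write SYNC + payload into the pixel LSBs, tiled over the image."""
    unit = SYNC + to_bits(user_id)
    return [(p & ~1) | unit[i % len(unit)] for i, p in enumerate(pixels)]

def extract(crop, id_len):
    """Recover the payload from any crop containing one full tiled copy."""
    bits = [p & 1 for p in crop]
    unit_len = len(SYNC) + id_len * 8
    for start in range(len(bits) - unit_len + 1):
        if bits[start:start + len(SYNC)] == SYNC:
            return from_bits(bits[start + len(SYNC):start + unit_len])
    return None
```

Since only the least significant bit of each pixel changes, the embedding is imperceptible on screen; the invisibility concern and the camera-robustness concern pull in opposite directions, exactly as the hologram trade-off above.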
I asked this some time ago on GitHub and was asked to move the question here, so I will.
From my experience, starting a game on a HoloLens makes the camera start at (0, 0, 0) in the scene. When starting with an HMD, the head has approximately the correct height, which can also be adjusted in the Mixed Reality Portal if it is not perfect.
If those two were to meet in a networked environment, one would see the other at his feet, or high up in the air when viewed the other way around.
To get those two to meet at eye level, you either raise one up or lower the other down. In either case, you need to know by how much.
The HoloLens does not have an internal height representation; at best you could calculate one from the generated spatial mesh. The HMD, on the other hand, does have information about its height, a base height even, otherwise I couldn't configure that in the Portal, kneel down, etc., and still be the correct height above the floor.
Now the question is: how do I read this base height for the HMD so I can lower the floor to that height, effectively setting the networked parties to eye level?
For now I have to set an arbitrary height of about 1.6 meters, but that's my colleague's standing height; I am about 1.93 meters tall.
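The adjustment itself is trivial once the base height is known from somewhere. A sketch of the vertical offset each client would apply to the remote avatar, assuming the height value is supplied externally (the function name and device labels are invented for illustration):

```python
def eye_level_offset(local_device, hmd_base_height_m):
    """Vertical translation to apply to the remote avatar so both parties
    meet at eye level.

    Assumption: HoloLens clients start their camera at y = 0, while an HMD
    client's camera starts at its configured floor-to-eye base height,
    which is passed in here rather than read from any real API."""
    if local_device == "hololens":
        # The remote HMD avatar arrives at y ~= base height; bring it to 0.
        return -hmd_base_height_m
    # The remote HoloLens avatar arrives at y ~= 0; raise it to eye level.
    return hmd_base_height_m
```

The hard part of the question, reading that base height programmatically instead of hard-coding 1.6 m, is exactly what remains unanswered.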
NeerajW on GitHub wanted to see if he could find an API that returns the Portal default height, but never replied.
With the HoloLens 2 joining the community, that's now two AR devices that might want to meet VR avatars from around the world.
How do you guys do this?
In your scenario, are you trying to 1-to-1 match the actual user heights? If so, you may wish to place the avatars at eye level (if they are small/hovering objects) or on the floor plane (if they are humanoid).
Since the VR users cannot actually see the HoloLens user, this may work well. For the HoloLens user it may feel odd, presuming the VR user is visible (in the same room).
In the Holograms 250 academy course (please note this uses the legacy HoloToolkit), the app represents HoloLens users as floating clouds in VR. The VR user was represented as a small figure on an island model to the HoloLens user.
I hope this is helpful.
David
(I also have an inquiry with other members of the team to see if there are other ideas).
Does anyone know if a single static active RFID tag is able to detect motion (e.g., a moving human or object) by itself, without the use of any extra tags or sensors?
You might be able to do it by running a permanent inventory and recording both the read time and the signal strength received from the tag in each session. Both indicate that either orientation or distance has changed, but they are not exclusively correlated with motion (both factors can change even if the tag is not moving), so you should test extensively before settling on this solution.
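A sketch of that idea, assuming you can collect per-session RSSI readings from your reader; the threshold is a made-up starting point that would need per-site calibration:

```python
from statistics import pstdev

def tag_moved(rssi_readings, threshold_db=2.0):
    """Heuristic motion check over one inventory window: if the received
    signal strength (dBm) varies by more than `threshold_db`, assume the
    tag, or something near it, moved. RSSI also drifts with temperature,
    people walking by, multipath, etc., so this gives false positives and
    needs calibration against a known-static reference tag."""
    return pstdev(rssi_readings) > threshold_db

static = [-61.0, -60.5, -61.2, -60.8, -61.1]   # tag sitting still
moving = [-61.0, -55.0, -66.0, -52.0, -70.0]   # tag carried across the room
```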
Since you are talking about an active tag: there are manufacturers that incorporate motion sensors into their tags to save battery (the tag transmits more often when it is moving), so you could contact them to see which ones let you read data from that sensor.
If you are thinking about fixing the tag on a wall and having it detect when someone passes by, I do not know of such a product: there are tags with integrated thermometers or even humidity sensors, but not motion detection over an area of interest. For that you can use a wireless motion sensor.
Is it possible to search for all iBeacons which are nearby? I know it's possible to search for iBeacons by UUID, but I want to find all nearby iBeacons.
An iBeacon defines a region, with the UUID as its defining property. Therefore, you can only search for the ones matching a UUID you know.
After you find one or more with a specific UUID, you can figure out which is closest using the delegate callbacks, where the beacons are stored in an array ordered by distance.
There is great sample code on this, and also a pretty detailed WWDC video session: "What's new in Core Location".
iBeacons are higher-level constructs than regular BLE peripherals. From what can be determined from the Apple docs, beacons are tied to their service UUID; i.e., a family of beacons is a "region", and you go into and out of a region based on the range and visibility of a beacon to YOU, not the other way around. Unfortunately, Apple has used the term "region", which most of us probably associate with MapKit, and this adds to the general confusion.
Here's the bad news: you can only scan for proximityUUIDs that you already know; there is no "wildcard" proximityUUID. Additionally, CLBeacon doesn't expose much in the way of lower-level CoreBluetooth guts, so if you want to find all beacons that happen to be near you, you'll have to use CoreBluetooth, scan for peripherals, then look through the returned peripherals and query each of them to find beacons. Of course, Apple has neither registered (with the Bluetooth SIG) nor (yet) published the iBeacon characteristics, so you'll need a BT sniffer to reverse-engineer what distinguishes an iBeacon from any other BLE device.
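The frame layout has since been widely reverse-engineered and documented by third parties. Assuming you can get at the raw manufacturer-specific advertisement data (which iOS itself withholds for iBeacon frames, so this is mostly useful on other platforms or with a sniffer), recognizing an iBeacon looks roughly like this:

```python
import struct
import uuid

def parse_ibeacon(mfg_data: bytes):
    """Parse BLE manufacturer-specific data as an iBeacon frame.

    Layout (reverse-engineered, now widely documented):
      bytes 0-1:   Apple company ID 0x004C (little-endian)
      byte  2:     0x02 (iBeacon indicator)
      byte  3:     0x15 (remaining length, 21 bytes)
      bytes 4-19:  proximity UUID
      bytes 20-21: major (big-endian)
      bytes 22-23: minor (big-endian)
      byte  24:    measured TX power at 1 m (signed dBm)
    Returns (uuid, major, minor, tx_power), or None if not an iBeacon."""
    if len(mfg_data) != 25 or mfg_data[:4] != b"\x4c\x00\x02\x15":
        return None
    proximity_uuid = uuid.UUID(bytes=mfg_data[4:20])
    major, minor = struct.unpack(">HH", mfg_data[20:24])
    tx_power = struct.unpack(">b", mfg_data[24:25])[0]
    return proximity_uuid, major, minor, tx_power
```

The example UUID used in the test below is Apple's well-known AirLocate sample UUID, not anything special at the protocol level.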
Each app would use its own specific UUID, using the "major" and "minor" integer values to differentiate between beacons.
For example, the UUID would be associated with a chain of shops; major would identify the shop, and minor the aisle, or even a group of products.
Scanning for unknown UUIDs would not be very useful, as your app would not know what to do with the information.
The UUID is generated once and for all, using the "uuidgen" command in the terminal.
Sadly, there is no protocol to actually communicate with beacons, hence there is no standard way to get the location of a beacon, or any other useful info.
It would have been so much better if we could open a connection to a beacon, usually the closest one, and obtain additional data from it, without having to be on the same Wi-Fi network.
You either have to use Bonjour to communicate with the device over Wi-Fi, or use the major and minor IDs to obtain data from a web service of some kind.
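As a sketch of that UUID/major/minor hierarchy, here is what the app-side lookup might look like; the UUID, shop names, and aisle numbers are all invented for illustration:

```python
# One shared UUID per deployment; major identifies the shop, minor the aisle.
CHAIN_UUID = "e2c56db5-dffb-48d2-b060-d0f5a71096e0"  # assumed, app-specific

SHOPS = {
    1: {"name": "Downtown", "aisles": {10: "produce", 20: "bakery"}},
    2: {"name": "Airport", "aisles": {10: "snacks"}},
}

def describe_beacon(uuid_str, major, minor):
    """Map a ranged beacon to a human-readable location, or None if it
    belongs to someone else's deployment or an unknown shop."""
    if uuid_str.lower() != CHAIN_UUID:
        return None
    shop = SHOPS.get(major)
    if shop is None:
        return None
    aisle = shop["aisles"].get(minor, "unknown aisle")
    return f"{shop['name']}: {aisle}"
```

This is why a wildcard scan buys you little: without a table like `SHOPS` keyed to a known UUID, a discovered beacon carries no usable meaning.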
Unfortunately, you cannot at this time search for an arbitrary iBeacon without first knowing its proximityUUID value. I've tried working directly with CoreBluetooth and, although you can discover and connect to transmitting beacons in your area, what you get back is gibberish with no relation to the BLE UUID. So you can't even confirm that the peripheral you have connected to is in fact an iBeacon.
This does not appear to be a limitation of the BLE spec; rather, it is a limitation imposed by Apple. It also appears that this limitation does not exist on the Android platform.
Until this gap is closed, Android will have a significant advantage over iOS in this area.
I disagree with previous comments that scanning for unknown UUIDs would be useless. On the contrary, if you could discover a beacon's UUID, you could create a map of beacon/location/subject in the cloud and use it to navigate (assuming the beacon was fixed) via a web service. You could crowd-source the data so that eventually a very rich database of beacon UUID/location pairs would be available to everyone who wanted to write location apps. Perhaps this is why Apple is hiding the info; they may be holding it back for their own purposes.
According to Radius Networks (authors of the AltBeacon spec and the android-beacon-library), it's not possible to identify a beacon using CoreBluetooth.
My graduate project is a Smart Attendance System for a university using RFID.
What if one student has multiple cards (cheating) and wants to mark his friend as present as well? In that situation my system will not notice the human deception: it will register whatever RFID tags the reader detects, mark both students as present, and store them in the database.
I have been facing this problem from the beginning, and it is a huge glitch in my system.
I need a solution or any idea for this problem, whether implemented in the code or in real life, to identify the actual humans.
There are a few ways you could do this depending upon your dedication, the exact tech available to you, and the consistency of the environment you are working with. Here are the first two that come to mind:
1) Create a grid of reader antennae on the ceiling of your room and use signal response times to the three nearest readers to get a decent level of confidence as to where the student tag is. If two tags register as being too close, display the associated names for the professor to call out and confirm presence. This solution will be highly dependent upon the precision of your equipment and stability of temperature/humidity in the room (and possibly other things like liquid and metal presence).
2) Similar to the first solution, but a little different. Some readers and tags (Impinj R2000 and Indy readers, Impinj Monza 5+ for sure, maybe others as well) have the ability to report a response time and a phase angle associated with the signal received from an interrogated tag. Using a setup similar to the first, you can get a much higher level of reliability and precision with this method.
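The geometry behind option 1 is ordinary trilateration: each measured distance defines a circle around an antenna, and subtracting the circle equations pairwise leaves a linear system in (x, y). A minimal sketch with idealized, noise-free distances:

```python
def trilaterate(p1, d1, p2, d2, p3, d3):
    """2-D position from distances to three antennas at known positions.

    Subtracting circle equation i from circle equation 1 cancels the
    quadratic terms and yields a_i*x + b_i*y = c_i; two such equations
    pin down (x, y). Distances derived from RF response times are noisy,
    so a real deployment would use more antennas and least squares."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

Flagging "too close" pairs is then just a distance check between the resolved tag positions.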
Your software could randomly pick a few names of attending students, so that the professor can ask them to identify themselves. This will not eliminate the possibility of cheating, but it increases the risk of being caught.
Other idea: count the number of attendees (either by the professor or by a camera plus software) and compare that to the number of RFID tags visible.
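The random spot-check is a one-liner; a sketch, with the sample size as a tunable parameter and an optional seed for reproducibility:

```python
import random

def spot_check(present_ids, k=3, seed=None):
    """Pick up to `k` detected students at random for the professor to
    call out; anyone "present" only via a borrowed card risks being
    caught on any given day."""
    rng = random.Random(seed)
    return rng.sample(present_ids, min(k, len(present_ids)))
```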
There is no solution for this limitation within RFID itself.
But you could combine biometric (fingerprint) recognition with the RFID card. With this, your system would have to:
1. Integrate a biometric scanner with your RFID reader
2. Store the student's biometric data on the card
And while taking attendance:
1. Read the UID
2. Scan the student's fingerprint
3. Match the scanned fingerprint against the one stored on the card (step 2 above)
4. Mark attendance (present if the biometric matches, absent if not)
Well, we all have that glitch, and you can do nothing about it directly, but with the help of a camera system I think you could minimise it.
Why use a camera system and not a biometric fingerprint system? Let's rephrase the question: why use RFID at all if there is a biometric fingerprint system? ;)
What is ideal to use is RFID middleware that handles the tag reading.
Once the reader detects a tag, the middleware simply calls the security camera system, requests a snapshot, and stores it in the database. I'm using an RFID middleware called Envoy.
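The flow described is a simple read-triggered callback. A sketch with stand-in camera and database objects (not the actual Envoy API, which I can't speak for):

```python
# On every tag read, request a snapshot from the camera system and store
# both together, so each attendance record has photographic evidence.

def on_tag_read(tag_id, camera, db):
    snapshot = camera.take_snapshot()  # assumed camera-system interface
    db.append({"tag": tag_id, "photo": snapshot})
    return snapshot

class FakeCamera:
    """Stand-in for the real security camera system."""
    def take_snapshot(self):
        return b"jpeg-bytes"
```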