Automatic pupil detection with OpenCV & node.js

I'm embarking on a project that involves creating an online tool to measure a person's pupillary distance from a webcam/camera still photo. The tricky part is the automatic detection of the pupils in the photo. I have little to no experience with this kind of image processing, but I've been doing some research.
So far I'm considering using OpenCV from node.js via this available library: https://github.com/peterbraden/node-opencv.
Am I at all on the right track? The capabilities of this library seem limited compared to the more developed ones for C++/Java/Python/etc., but the timeline for this project doesn't allow for learning a new language in the process.
Just wanted to reach out to anyone with more experience with this kind of thing; any tips etc. are more than welcome. Thanks!

I'm not sure about pupil detection, but eye detection is not hard; see this sample CoffeeScript:
opencv = require "opencv"

# Previous frame's eyepair and detection flag, shared across frames
lasteyepair = x: 0, y: 0
foundEye = false

detect = (im) ->
  # Stage 1: detect eye pairs in the whole frame.
  im.detectObject "./node_modules/opencv/data/haarcascade_mcs_eyepair_small.xml", {}, (err, eyepairs) ->
    im.rectangle [eyepair.x, eyepair.y], [eyepair.x + eyepair.width, eyepair.y + eyepair.height] for eyepair in eyepairs
    for eyepair in eyepairs
      if ((eyepair.x - lasteyepair.x) ** 2 + (eyepair.y - lasteyepair.y) ** 2 < 500)
        lasteyepair = eyepair
        foundEye = true
      # Stage 2: crop a region around the eye pair and detect individual eyes inside it.
      im2 = im.roi Math.max(eyepair.x - 10, 0), Math.max(eyepair.y - 10, 0), Math.min(eyepair.width + 20, im.width()), Math.min(eyepair.height + 20, im.height())
      im2.detectObject "./node_modules/opencv/data/haarcascade_eye.xml", {}, (err, eyes) ->
        im.rectangle [Math.max(eyepair.x - 10 + eye.x, 0), Math.max(eyepair.y - 10 + eye.y, 0)], [eyepair.x - 10 + eye.x + eye.width, eyepair.y - 10 + eye.y + eye.height] for eye in eyes
        console.log "eyes", eyes
        im.save "site/webcam.png"   # see the addendum below: this runs once per eye pair

camera = new opencv.VideoCapture 0

capture = () ->
  camera.read (err, im) ->
    if err
      camera.close()
      console.error err
    else
      detect im
    setTimeout capture, 1000

setTimeout capture, 2000
Object detection works with the Viola-Jones method, which detectObject runs asynchronously. Once the detection has finished, a callback is invoked that can process the positions and sizes of the found objects.
I do the detection in two stages: First I detect eyepairs which is reasonably fast and stable, then I crop the image to a rectangle around the eyepair and detect eyes inside of that. If you want to detect pupils, you should then find a cascade for pupil detection (shouldn't be too hard since a pupil is basically a black dot), crop the image around each eye and detect the pupil in there.
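If it helps, here is a minimal JavaScript sketch of that third stage, assuming you already have an eye rectangle from the previous stage; the cascade file name haarcascade_pupil.xml is hypothetical, and you would have to source a pupil cascade yourself:

// `eye` is a {x, y, width, height} rectangle in the coordinates of `im`,
// as returned by the eye-detection stage above.
function detectPupil(im, eye, done) {
  // Crop a small region around the eye, clamped to the image bounds.
  var x = Math.max(eye.x - 5, 0);
  var y = Math.max(eye.y - 5, 0);
  var w = Math.min(eye.width + 10, im.width() - x);
  var h = Math.min(eye.height + 10, im.height() - y);
  var eyeRoi = im.roi(x, y, w, h);

  // Hypothetical cascade: OpenCV does not ship one for pupils by default.
  eyeRoi.detectObject('./cascades/haarcascade_pupil.xml', {}, function (err, pupils) {
    if (err) return done(err);
    // Report pupil centres in the coordinates of the original image.
    done(null, pupils.map(function (p) {
      return { cx: x + p.x + p.width / 2, cy: y + p.y + p.height / 2 };
    }));
  });
}

The pupillary distance would then be the pixel distance between the two centres; converting that to millimetres still requires a reference of known size in the photo.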
Addendum:
My code has a little bug: im.save is being called multiple times for every eyepair, whereas it should only be called in the very last callback.
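A minimal JavaScript sketch of that fix, using the same node-opencv calls as above: count the outstanding detections and save only when the counter reaches zero.

function detectEyesAndSaveOnce(im, eyepairs, cascadePath, outPath) {
  var pending = eyepairs.length;
  if (pending === 0) return im.save(outPath);

  eyepairs.forEach(function (eyepair) {
    var roi = im.roi(Math.max(eyepair.x - 10, 0), Math.max(eyepair.y - 10, 0),
                     Math.min(eyepair.width + 20, im.width()),
                     Math.min(eyepair.height + 20, im.height()));
    roi.detectObject(cascadePath, {}, function (err, eyes) {
      // ... draw the rectangles for `eyes` here, as in the CoffeeScript above ...
      pending -= 1;
      if (pending === 0) im.save(outPath); // the very last callback saves exactly once
    });
  });
}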

ArcGIS - How to move a graphic or symbol on the map

I'm trying to create an overlay in ArcGIS that has moving graphics/symbols which are updated by coordinates received from roving devices. I'm able to display a simple symbol initially but cannot get it to move on the map. My test code is
GraphicsOverlay machineOverlay = new GraphicsOverlay();
MainMapView.GraphicsOverlays.Add(machineOverlay);

MapPointBuilder rdLocation = new MapPointBuilder(150.864119200149, -32.3478640837185, SpatialReferences.Wgs84);

SimpleMarkerSymbol sRD1234 = new SimpleMarkerSymbol()
{
    Color = System.Drawing.Color.Red,
    Size = 10,
    Style = SimpleMarkerSymbolStyle.Circle
};

Graphic graphicWithSymbol = new Graphic(rdLocation.ToGeometry(), sRD1234);
machineOverlay.Graphics.Add(graphicWithSymbol);
// here the red circle is displayed correctly on the map

rdLocation.SetXY(150.887115, -32.357600);
rdLocation.ReplaceGeometry(rdLocation.ToGeometry());
// here I expect the red circle to move but it doesn't
Do I need to trigger an event to "re-render" or refresh the overlay, or what do I need to do to get the graphic to move on my map?
There was a similar question here and the answer was "just update the geometry" which is what I'm attempting to do, but with no success.
If there is an entirely different or better approach to moving markers on a map please suggest, I'm just getting started in the ArcGIS runtime.
Thanks
After a lot of searching I replaced one line of code and it's now working:
//rdLocation.ReplaceGeometry(rdLocation.ToGeometry());
graphicWithSymbol.Geometry = rdLocation.ToGeometry();
It seems I misunderstood the function of ReplaceGeometry(). Any clarification on this would be helpful.

Why don't my RayCast2Ds detect walls from my tilemap?

I am currently trying to make a small top-down RPG with grid movement.
To simplify things, when I need to make something move one way, I use a RayCast2D node and check whether it collides, to know if I can move said thing.
However, it does not seem to detect the walls I've placed until I am inside of them.
I've already checked the collision layers, etc., and they seem to be set up correctly.
What have I done wrong?
More info below:
Here's the code for the raycast check:
func is_path_obstructed_by_obstacle(x, y):
    $Environment_Raycast.set_cast_to(Vector2(GameVariables.case_width * x * rayrange, GameVariables.case_height * y * rayrange))
    return $Environment_Raycast.is_colliding()
My walls are from a TileMap, with collisions set up. Everything that uses collisions is on the default layer for now.
Also, here's the function that makes my character move:
func move():
    var direction_vec = Vector2(0, 0)
    if Input.is_action_just_pressed("ui_right"):
        direction_vec = Vector2(1, 0)
    if Input.is_action_just_pressed("ui_left"):
        direction_vec = Vector2(-1, 0)
    if Input.is_action_just_pressed("ui_up"):
        direction_vec = Vector2(0, -1)
    if Input.is_action_just_pressed("ui_down"):
        direction_vec = Vector2(0, 1)

    if not is_path_obstructed(direction_vec.x, direction_vec.y):
        position += Vector2(GameVariables.case_width * direction_vec.x, GameVariables.case_height * direction_vec.y)
        grid_position += direction_vec
    return
With ray casts, always make sure they are enabled.
Just in case, I'll also mention that the ray cast's cast_to is in the ray cast's local coordinates.
Of course collision layers apply, and the ray cast has exclude_parent enabled by default, but I doubt that is the problem.
Finally, remember that the ray cast updates on the physics frame, so if you are using it from _process it might be giving you outdated results. You can call force_update_transform and force_raycast_update on it to solve that, which is also what you would do if you need to move it and check multiple times in the same frame.
For debugging, you can turn on collision shapes in the debug menu, which will allow you to see them when running the game and check whether the ray cast is positioned correctly. If the ray cast is not enabled it will not appear. Also, by default, they will turn red when they collide with something.

Material using plain colours getting burnt when using THREE.ACESFilmicToneMapping

We are updating our three.js app setup so that it uses THREE.ACESFilmicToneMapping (because our scene uses IBL from an EXR environment map).
In that process, materials using textures now look great (map colours used to be washed out before the change, as illustrated below).
with renderer.toneMapping = THREE.LinearToneMapping (default)
with renderer.toneMapping = THREE.ACESFilmicToneMapping
However, the problem is that plain colours (without any maps) are now looking burnt...
with renderer.toneMapping = THREE.LinearToneMapping (default)
with renderer.toneMapping = THREE.ACESFilmicToneMapping
It's now totally impossible to get a bright yellow or green, for example. Turning down renderer.toneMappingExposure or material.envMapIntensity can help, but materials with textures then get way too dark... I.e., for any given setting, materials using plain colours are either too bright, or materials using textures are too dark.
I'm not sure if I am missing something, but it looks like there is an issue in this setup. Is there any other parameter that we are overlooking that is causing this result?
Otherwise, we are loading all our models using the GLTFLoader, and we have renderer.outputEncoding = THREE.sRGBEncoding; as per the documentation of the GLTFLoader.
Our environment map is an equirectangular EXR loaded with EXRLoader:
import { EXRLoader } from 'three/examples/jsm/loaders/EXRLoader';

const envMapLoader = new EXRLoader();
envMapLoader.load(
  environmentMapUrl,
  rawTexture => {
    const pmremGenerator = new THREE.PMREMGenerator(renderer);
    pmremGenerator.compileEquirectangularShader();
    const envMapTarget = pmremGenerator.fromEquirectangular(rawTexture);
    const { texture } = envMapTarget;
    return texture;
  },
  ...
)
The short answer is that this is expected behaviour and there will always be trade-offs in lighting/colours. One thus has to empirically select settings depending on the specific setup/application and the desired results.
From Don McCurdy's comment directly on my question above:
You may need to go to the three.js forums for this question. There is no quick fix to add HDR lighting to colors that are already 100% saturated. Lighting is not a simple topic, and different tonemapping methods make different tradeoffs here.
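For illustration, here is a rough JavaScript sketch of the kind of per-material compromise this implies; the values are arbitrary examples, and the per-material part assumes MeshStandardMaterial-style materials (the ones that have envMapIntensity):

// Keep ACES tone mapping globally, but trade away a little exposure.
renderer.toneMapping = THREE.ACESFilmicToneMapping;
renderer.toneMappingExposure = 0.9; // example value, tune empirically

// Reduce the IBL contribution only on flat-coloured materials,
// so textured materials keep their look.
scene.traverse((obj) => {
  if (obj.isMesh && obj.material && !obj.material.map) {
    obj.material.envMapIntensity = 0.6; // example value
    // Alternatively, darken the base colour slightly so ACES has some headroom:
    // obj.material.color.multiplyScalar(0.85);
  }
});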

OSG/OSGEarth How to move a Camera

I'm making a drone simulator with osgEarth. I've never used OSG before, so I have some trouble understanding cameras.
I'm trying to move a camera to a specific location (lat/lon) with a specific roll, pitch and yaw in osgEarth.
I need to have multiple cameras, so I use a composite viewer. But I don't understand how to move a particular camera.
Currently I have one with the EarthManipulator, which works fine.
class NovaManipulator : public osgGA::CameraManipulator {
    osg::Matrixd getMatrix() const
    {
        // drone_ is just a struct that contains up-to-date information
        float roll  = drone_->roll(),
              pitch = drone_->pitch(),
              yaw   = drone_->yaw();
        auto position = drone_->position();

        osg::Vec3d world_position;
        position.toWorld(world_position);
        cam_view_->setPosition(world_position);

        const osg::Vec3d rollAxis(0.0, 1.0, 0.0);
        const osg::Vec3d pitchAxis(1.0, 0.0, 0.0);
        const osg::Vec3d yawAxis(0.0, 0.0, 1.0);

        // I found this code here:
        // https://stackoverflow.com/questions/32595198/controling-openscenegraph-camera-with-cartesian-coordinates-and-euler-angles
        osg::Quat rotationQuat;
        rotationQuat.makeRotate(
            osg::DegreesToRadians(pitch + 90), pitchAxis,
            osg::DegreesToRadians(roll), rollAxis,
            osg::DegreesToRadians(yaw), yawAxis);
        cam_view_->setAttitude(rotationQuat);

        // I don't really understand this part either
        auto nodePathList = cam_view_->getParentalNodePaths();
        return osg::computeLocalToWorld(nodePathList[0]);
    }

    osg::Matrixd getInverseMatrix() const
    {
        // Don't know why this needs to be inverted
        return osg::Matrix::inverse(getMatrix());
    }
};
Then I install the manipulator into a Viewer. When I simulate the world, the camera is in the right place (lat/lon/height).
But the orientation is completely wrong and I cannot find where I need to "correct" the axes.
Actually my drone is in France, but the "up" vector is wrong: it still points north instead of being vertical relative to the ground.
See what I'm getting on the right camera
I need to have a yaw relative to North (0 ==> North), and when my roll and pitch are set to zero I need to be "parallel" to the ground.
Is my approach (making a Manipulator) the best way to do that?
Can I put the camera object inside the scene graph (behind an osgEarth::GeoTransform, which works for my model)?
Thanks :)
In the past, I have done a cute trick involving using an ObjectLocator object (to get the world position and plane-tangent-to-surface orientation), combined with a matrix to apply the HPR. There's an invert in there to make it into a camera orientation matrix rather than an object placement matrix, but it works out ok.
http://forum.osgearth.org/void-ObjectLocator-setOrientation-default-orientation-td7496768.html#a7496844
It's a little tricky to see what's going on in your code snippet as the types of many of the variables aren't obvious.
AlphaPixel does lots of telemetry / osgEarth type stuff, so shout if you need help.
It is probably just the order of your rotations - matrix and quaternion multiplications are order dependent. I would suggest:
Try swapping order of pitch and roll in your MakeRotate call.
If that doesn't work, set all but 1 rotation to 0° at a time, making sure each is what you expect, then you can play with the orders (there are only 6 possible orders).
You could also make individual quaternions q1, q2, q3, where each represents h,p, and r, individually, and multiply them yourself to control the order. This is what the overload of MakeRotate you're using does under the hood.
Normally you want to do your yaw first, then your pitch, then your roll (or skip roll altogether if you like), but I don't recall off-hand whether osg::quat concatenates in a pre-mult or post-mult fashion, so it could be p-r-y order.

THREE.js in node.js environment

While making a simple multiplayer game, I chose THREE.js for the graphics on the browser side. In the browser everything works fine.
Then I thought:
The server has to check most user actions. So I WILL need to have a copy of the world on the server, let users interact with it, and then send its state back to the users.
So, as a good piece of code had already been written for the client side, I just made it node.js compatible and moved on. (Good collision detection that can use object.geometry is what I wanted so badly.)
As a result, the collision detection code stopped working. On the server side, the Raycaster exits at this line:
} else if ( object instanceof THREE.Mesh ) {

    var geometry = object.geometry;

    // Checking boundingSphere distance to ray
    if ( geometry.boundingSphere === null ) geometry.computeBoundingSphere();

    sphere.copy( geometry.boundingSphere );
    sphere.applyMatrix4( object.matrixWorld );

    if ( raycaster.ray.isIntersectionSphere( sphere ) === false ) {

        return intersects; // _HERE_

    }
And that happens because object.matrixWorld is the identity matrix.
But the objects are initialised: mesh.position and mesh.rotation are identical on the server and the client (in the browser, the raycaster works like a charm).
I think object.matrixWorld would normally be updated somewhere inside renderer.render(self.three_scene, self.camera), but of course that's not something I want to call on the server side.
So the question is: how do I make object.matrixWorld update on each simulation tick on the server side?
Or maybe advise me if there's some other way to achieve something similar to what I want.
Okay, that was simple.
renderer.render updates the matrices of the whole scene recursively. The entry point of that recursion is the updateMatrixWorld() method of Object3D.
So, before we use the Raycaster on the server side, we should call this method for each mesh in the list of collidable meshes (or once on the scene, which updates all of its descendants), as in the sketch below.
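A minimal JavaScript sketch of such a simulation tick, assuming scene and collidableMeshes are placeholder names for your own objects:

const THREE = require('three');

function simulationTick(scene, collidableMeshes, origin, direction) {
  // This is the part renderer.render would normally do for us:
  // recursively refresh matrixWorld for the whole scene graph.
  scene.updateMatrixWorld(true);

  const raycaster = new THREE.Raycaster(origin, direction.clone().normalize());
  return raycaster.intersectObjects(collidableMeshes);
}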
