THREE.js in node.js environment

While building a simple multiplayer game, I chose THREE.js for the browser-side graphics. In the browser everything works fine.
Then I thought:
The server has to validate most user actions. So I WILL need a copy of the world on the server, let users interact with it, and then send its state back to them.
Since a good piece of code had already been written for the client side, I just made it node.js compatible and moved on. (Solid collision detection that can use object.geometry is exactly what I wanted.)
As a result, the collision detection code stopped working. On the server side, Raycaster exits at this line:
} else if ( object instanceof THREE.Mesh ) {

    var geometry = object.geometry;

    // Checking boundingSphere distance to ray
    if ( geometry.boundingSphere === null ) geometry.computeBoundingSphere();

    sphere.copy( geometry.boundingSphere );
    sphere.applyMatrix4( object.matrixWorld );

    if ( raycaster.ray.isIntersectionSphere( sphere ) === false ) {
        return intersects; // _HERE_
    }
And that happens because object.matrixWorld is the identity matrix.
But the objects are initialized: mesh.position and mesh.rotation are identical on the server and the client (in the browser, the raycaster works like a charm).
I suspect that object.matrixWorld gets updated somewhere inside renderer.render(self.three_scene, self.camera), but of course calling the renderer is not something I want to do on the server side.
So the question is: how do I make object.matrixWorld update on each simulation tick on the server side?
Or maybe advise me if there's some other way to get something similar to what I want.

OK.
That was simple.
renderer.render updates the matrices of the whole scene recursively. The entry point of the recursion is the updateMatrixWorld() method of Object3D.
So, before using Raycaster on the server side, we should call this method for each mesh in the collidable meshes list.
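A minimal sketch of what a server-side tick could look like, assuming a scene and an array of collidable meshes are created elsewhere (the function name simulationTick and its parameters are illustrative):

const THREE = require('three');

// Run once per simulation tick, before any collision checks.
function simulationTick(scene, collidables, origin, direction) {
    // Recursively refresh matrixWorld for the whole scene; this is the
    // step renderer.render would normally perform in the browser.
    scene.updateMatrixWorld(true);

    const raycaster = new THREE.Raycaster(origin, direction.clone().normalize());
    return raycaster.intersectObjects(collidables);
}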


get_node crashing game when node is deleted

First of all, I'm new to programming in general, so I'm assuming there's a simple answer to this question; I just couldn't seem to find it anywhere.
I'm making a simple platformer game with enemies that move toward the player. I used this code in the enemy's script, inside the physics process, to get the player position:
player_position = get_parent().get_node("Player").get_position()
However, when the player is freed with queue_free() on reaching 0 health, the game crashes immediately and I get a null error because there is no longer a Player node. How can I work around this?
You could just set $Player.visible to false instead of freeing it, or you could check whether the player exists first using get_parent().has_node("Player").
When you destroy the player, the physics process function is still trying to get the player node, even though it no longer exists. So, as Lucas said, you could replace:
player_position = get_parent().get_node("Player").get_position()
with something like:
if get_parent().has_node("Player"):
    player_position = get_parent().get_node("Player").get_position()
(Before setting player_position, this checks whether the player node even exists.)
I think you could use weakref (documentation here).
If you declare a weak reference:
var player_ref: WeakRef = null
and store the reference (note: assign to the member variable, don't redeclare it with var):
func store_player_node():
    player_ref = weakref(get_parent().get_node("Player"))
then you can access it later:
if player_ref.get_ref():
    player_position = player_ref.get_ref().get_position()
Weakref has an advantage over calling get_parent().get_node("Player") directly. Imagine the following scenario:
The player dies and its node is removed from the parent node's children.
A new node named Player is created and added to the scene tree in the same place as the dead Player.
get_parent().get_node("Player") will return a node and the code will not crash. However, it will be the new Player node, not the old one, which is usually not what you want.
I hope this helps!

Can I trigger the Hololens Calibration sequence from inside my application?

I am creating a HoloLens app that requires the best accuracy possible for hologram placement. This application will be used by numerous individuals. Whenever I want to show the application's progress, I have to have the user go through the calibration process first, otherwise the holograms appear to have way too much drift.
I would like to be able to launch the HoloLens calibration process automatically when the application opens. Later, after I set up user authentication and ID management, I will call the calibration process whenever a new user is detected.
https://learn.microsoft.com/en-us/windows/mixed-reality/calibration
I have looked into the calibration (via the above documentation and elsewhere), and it seems that all it sets is the IPD (interpupillary distance). However, the alternative solutions I have found that allow for dynamic IPD adjustment appear to be invalid for UWP Store apps, which makes them unusable for me.
I am looking for any help or direction, or even just to know whether this is possible. Thank you.
Yes, it is possible to do this. You need to use LaunchUriAsync to launch the following URI: ms-hololenssetup://EyeTracking
Here is an example implementation, taken from the LaunchUri example in the MRTK:
public void LaunchEyeTracking()
{
#if WINDOWS_UWP
    UnityEngine.WSA.Application.InvokeOnUIThread(async () =>
    {
        bool result = await global::Windows.System.Launcher.LaunchUriAsync(new System.Uri("ms-hololenssetup://EyeTracking"));
        if (!result)
        {
            Debug.LogError("Launching URI failed to launch.");
        }
    }, false);
#else
    Debug.LogError("Launching eye tracking is only supported on Windows UWP");
#endif
}

nodejs vs. ruby / understanding request processing order

I have a simple utility that I use to resize images on the fly via URL params.
Having some trouble with the Ruby image libraries (CMYK to RGB conversion is, how to say… "unavailable"), I gave it a shot with node.js, which solved the issue.
Basically, if the image does not exist, node or ruby transforms it. When the image has already been requested/transformed, the ruby or node process isn't touched and the image is returned statically.
The ruby version works perfectly. It's a bit slow if a lot of transforms are requested at once, but very stable: it always gets through, whatever the amount (I see the images arriving on the page one after another).
The node version also works, but when a large number of images is requested for a single page load, the first image is transformed, then all the other requests return the very same image (the last one transformed). If I refresh the page, the first image (already transformed) is returned right away, the second one is returned correctly transformed, but then all the other images returned are the same as the one just transformed, and it goes on like that for every refresh. Not optimal: the requests get "merged" at some point and all return the same image, for a reason I don't understand.
(By "large amount" I mean more than one.)
The ruby version:
get "/:commands/*" do |commands, remote_path|
  path = "./public/#{commands}/#{remote_path}"
  root_domain = request.host.split(/\./).last(2).join(".")
  url = "https://storage.googleapis.com/thebucket/store/#{remote_path}"
  img = Dragonfly.app.fetch_url(url)
  resized_img = img.thumb(commands).to_response(env)
  return resized_img
end
The node.js version:
app.get('/:transform/:id', function(req, res, next){
  parser.parse(req.params, function(resized_img){
    // the transforms are done via lovell/sharp
    // parser.parse parses the params, writes the file,
    // and returns the file path
    // then:
    fs.readFile(resized_img, function(error, data){
      res.write(data)
      res.end()
    })
  })
})
It feels like I'm missing a crucial point in node. I expected the same behaviour from node and ruby, but obviously the same pattern transposed to node just does not work as expected. Node does not wait for one request to finish before processing the next; it processes them in an order that is not clear to me.
I also understand that I'm probably not using the right words to describe the issue; I hope it still speaks to some experienced users who can provide clarification and a better understanding of what happens behind the node scenes.
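For what it's worth, one common way to produce exactly this symptom in node is shared mutable state across concurrent requests. A purely hypothetical sketch (parse and doAsyncTransform stand in for the asker's parser module, which isn't shown in the question) of how a module-level variable makes every in-flight request resolve to the last written path:

// Hypothetical parser: lastPath is module-level, shared by ALL requests.
var lastPath;

function parse(params, cb) {
  lastPath = '/tmp/' + params.transform + '-' + params.id + '.png';
  doAsyncTransform(lastPath, function () {
    // By the time the async work finishes, later requests may have
    // overwritten lastPath, so every callback sees the same (last) file.
    cb(lastPath);
  });
}

// Keeping the path in a local variable instead lets each request
// close over its own value:
function parseFixed(params, cb) {
  var path = '/tmp/' + params.transform + '-' + params.id + '.png';
  doAsyncTransform(path, function () {
    cb(path);
  });
}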

Automatic pupil detection with OpenCV & node.js

I'm embarking on a project that involves creating an online tool to measure a person's pupillary distance via a webcam/camera still photo. The tricky part is the automatic detection of the pupils in the photo. I have little to no experience with image processing of this kind, but I've been doing some research.
So far I'm considering using OpenCV through node.js via this available library: https://github.com/peterbraden/node-opencv.
Am I at all on the right track? The capabilities of this library seem limited compared to the more developed ones for C++/Java/Python/etc., but the timeline for this project doesn't allow for learning a new language in the process.
Just wanted to reach out to anyone with more experience with this kind of thing; any tips etc. are more than welcome. Thanks!
I'm not sure about pupil detection, but eye detection is not hard; see this sample CoffeeScript:
opencv = require "opencv"

detect = (im) ->
  im.detectObject "./node_modules/opencv/data/haarcascade_mcs_eyepair_small.xml", {}, (err, eyepairs) ->
    im.rectangle [eyepair.x, eyepair.y], [eyepair.x+eyepair.width, eyepair.y+eyepair.height] for eyepair in eyepairs
    for eyepair in eyepairs
      if ((eyepair.x - lasteyepair.x) ** 2 + (eyepair.y - lasteyepair.y) ** 2 < 500)
        lasteyepair = eyepair
        foundEye = true
        im2 = im.roi Math.max(eyepair.x-10,0), Math.max(eyepair.y-10,0), Math.min(eyepair.width+20,im.width()), Math.min(eyepair.height+20,im.height())
        im2.detectObject "./node_modules/opencv/data/haarcascade_eye.xml", {}, (err, eyes) ->
          im.rectangle [Math.max(eyepair.x-10+eye.x,0), Math.max(eyepair.y-10+eye.y,0)], [eyepair.x-10+eye.x+eye.width, eyepair.y-10+eye.y+eye.height] for eye in eyes
          console.log "eyes", eyes
          im.save "site/webcam.png"

camera = new opencv.VideoCapture 0

capture = () ->
  camera.read (err, im) ->
    if err
      camera.close()
      console.error err
    else
      detect im
      setTimeout capture, 1000

setTimeout capture, 2000
Object detection works with the Viola-Jones method, which detectObject runs asynchronously. Once the detection is finished, a callback is called that can process the positions and sizes of the found objects.
I do the detection in two stages: first I detect eyepairs, which is reasonably fast and stable, then I crop the image to a rectangle around the eyepair and detect eyes inside of it. If you want to detect pupils, you should then find a cascade for pupil detection (which shouldn't be too hard, since a pupil is basically a black dot), crop the image around each eye, and detect the pupil in there; a sketch of that third stage follows after the addendum below.
Addendum:
My code has a little bug: im.save is called multiple times, once per eyepair, whereas it should only be called in the very last callback.
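For the pupil stage, a rough JavaScript sketch under stated assumptions: roi and detectObject are the same node-opencv calls used above, eye is a rectangle from the eye-detection stage, and haarcascade_pupil.xml is a placeholder cascade you would have to find or train yourself:

// Hypothetical third stage: crop around one detected eye and run a
// pupil cascade inside the crop. `im` is a node-opencv Matrix and
// `eye` is an {x, y, width, height} rectangle from the stage above.
function detectPupil(im, eye, cb) {
  var pad = 5;
  var crop = im.roi(
    Math.max(eye.x - pad, 0),
    Math.max(eye.y - pad, 0),
    Math.min(eye.width + 2 * pad, im.width()),
    Math.min(eye.height + 2 * pad, im.height())
  );
  // "haarcascade_pupil.xml" is a placeholder; no pupil cascade ships
  // with node-opencv.
  crop.detectObject('./cascades/haarcascade_pupil.xml', {}, function (err, pupils) {
    if (err) return cb(err);
    cb(null, pupils); // coordinates are relative to the crop
  });
}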

How to sketch out an event-driven system?

I'm trying to design a system in Node.js (an attempt at solving one of my earlier problems using Node's concurrency), but I'm running into trouble figuring out how to draw a plan of how the thing should operate.
I'm getting very tripped up thinking in terms of callbacks instead of returned values. The flow isn't linear, and it's really boggling my ability to draft it. How does one draw an operational flow for an event-driven system?
I need something I can look at and say "OK, yes, that's how it will work. I'll start it over here, and it will give me back these results over here."
Pictures would be very helpful for this one. Thanks.
Edit: I'm looking for something more granular than UML; specifically, something that will help me transition from a blocking, object-oriented programming structure, where I'm comfortable, to a non-blocking, event-driven structure, where I'm unfamiliar.
Based on http://i.stack.imgur.com/9DDQP.png, what you need is a good flow library that lets you pipeline sync and async calls in node.
One such library is https://github.com/isaacs/slide-flow-control (look at the slide presentation there, too); the code outline for what you need to do is below.
It is self-documenting and, as you can see, quite concise: pure nodejs, no UML or images required.
var chain = require("slide/chain")
  , asyncMap = require("slide/async-map")
  ;

// start processing
main_loop(function() {
  console.log("its done"); // when finished
});

function main_loop(cb) {
  var res = [];
  // each entry in chain below fires sequentially, i.e. after
  // the previous function completes
  chain
    ( [ [start_update_q, "user-foo"]
      , [get_followed_users, chain.last]
      , [get_favorites, chain.last]
      , [calc_new_q]
      , [push_results, chain.last]
      ]
    , res
    , cb
    );
}

function get_favorites(users, cb) {
  function fn(user, cb_) {
    get_one_users_favorites(user, cb_);
  }
  // this runs get_one_users_favorites in parallel for all users,
  // and after all user favorites are gotten it fires callback cb
  asyncMap(users, fn, cb);
}

// write the various functions used in chain here;
// remember to either return the callback on completion,
// or pass it as an arg to the async call you make within the
// function, as above, i.e. asyncMap will fire cb on completion
UML might be appropriate. I'd look at the behavior diagrams.
