Basic Transition Issues - philips-hue

I'm trying to write a basic script which uses the remote API to turn on my lights, then transition them to a certain color. The code for doing so using a custom SDK looks something like this:
group.SetState(hue.State{On: true, Bri: 0, Hue: 4000, TransitionTime: 0})
time.Sleep(1 * time.Second)
group.SetState(hue.State{TransitionTime: 300, Bri: 254, Hue: 11500, Sat: 0})
Each SetState call makes a request to the group command API. Seems simple enough, but I'm having a couple of issues:
1. Unless I dim the lights and then turn them off first (or run the 'Cinema' formula from Hue Labs), the lights come on at whatever brightness they were at before, seemingly ignoring the first SetState call.
2. The brightness and saturation of the transition are ignored; all it does is transition the hue, though this behavior varies depending on whether I include the sleep timer or not.
Any thoughts on what I am doing wrong?
EDIT: It looks like the API isn't even respecting the first statement's brightness setting. If I make the call to set it to 0, nothing happens.

It turns out the issues I described were due to the json:"___,omitempty" tag being on the brightness, transition, and saturation fields. This means that when they have a value of 0 they are left out when marshalling to JSON in Go. Facepalm.
Converting the values to int pointers fixed the issue for me.
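A minimal sketch (with a hypothetical State struct, not the SDK's actual type) of why this happens: with omitempty, a plain int field whose value is 0 is dropped from the payload, while a pointer to 0 survives.
package main

import (
	"encoding/json"
	"fmt"
)

// State mimics the shape of the SDK's state struct, for illustration only.
type State struct {
	On     bool `json:"on"`
	Bri    int  `json:"bri,omitempty"`     // 0 is the int zero value, so it is dropped
	BriPtr *int `json:"bri_ptr,omitempty"` // nil is dropped, but a pointer to 0 is kept
}

func main() {
	zero := 0
	a, _ := json.Marshal(State{On: true, Bri: 0})
	b, _ := json.Marshal(State{On: true, BriPtr: &zero})
	fmt.Println(string(a)) // {"on":true}             – the brightness of 0 is silently lost
	fmt.Println(string(b)) // {"on":true,"bri_ptr":0} – the explicit zero survives
}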


Why doesn't my RayCast2D detect walls from my tilemap

I am currently trying to make a small top-down RPG with grid movement.
To simplify things, when I need to make something move one way, I use a RayCast2D node and check whether it collides, to know if I can move said thing.
However, it does not seem to detect the walls I've placed until I am inside of them.
I've already checked the collision layers, etc., and they seem to be set up correctly.
What have I done wrong?
More info below.
Here's the code for the raycast check:
func is_path_obstructed_by_obstacle(x, y):
	$Environment_Raycast.set_cast_to(Vector2(GameVariables.case_width * x * rayrange, GameVariables.case_height * y * rayrange))
	return $Environment_Raycast.is_colliding()
My walls are from a TileMap, with collisions set up. Everything that uses collisions is on the default layer for now.
Also, here's the function that makes my character move:
func move():
	var direction_vec = Vector2(0, 0)
	if Input.is_action_just_pressed("ui_right"):
		direction_vec = Vector2(1, 0)
	if Input.is_action_just_pressed("ui_left"):
		direction_vec = Vector2(-1, 0)
	if Input.is_action_just_pressed("ui_up"):
		direction_vec = Vector2(0, -1)
	if Input.is_action_just_pressed("ui_down"):
		direction_vec = Vector2(0, 1)
	if not is_path_obstructed(direction_vec.x, direction_vec.y):
		position += Vector2(GameVariables.case_width * direction_vec.x, GameVariables.case_height * direction_vec.y)
		grid_position += direction_vec
	return
With ray casts, always make sure the ray cast is enabled.
Just in case, I'll also mention that the ray cast's cast_to is in the ray cast's local coordinates.
Of course collision layers apply, and the ray cast has exclude_parent enabled by default, but I doubt that is the problem.
Finally, remember that the ray cast updates on the physics frame, so if you are using it from _process it might be giving you outdated results. You can call force_update_transform and force_raycast_update on it to solve that, which is also what you would do if you need to move it and check multiple times in the same frame.
For debugging, you can turn on collision shapes in the debug menu, which will let you see them when running the game and check whether the ray cast is positioned correctly. If the ray cast is not enabled it will not appear. Also, by default, the shapes turn red when they collide with something.
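As a minimal sketch (assuming the same $Environment_Raycast node and the GameVariables/rayrange values from the question), forcing the ray to update after changing cast_to makes is_colliding() reflect the new direction instead of the previous physics frame's result:
func is_path_obstructed_by_obstacle(x, y):
	$Environment_Raycast.enabled = true
	$Environment_Raycast.cast_to = Vector2(GameVariables.case_width * x * rayrange, GameVariables.case_height * y * rayrange)
	# Re-cast immediately instead of waiting for the next physics frame.
	$Environment_Raycast.force_raycast_update()
	return $Environment_Raycast.is_colliding()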

OSG/OSGEarth How to move a Camera

I'm making a drone simulator with osgEarth. I have never used OSG before, so I have some trouble understanding cameras.
I'm trying to move a camera to a specific location (lat/lon) with a specific roll, pitch and yaw in osgEarth.
I need to have multiple cameras, so I use a composite viewer. But I don't understand how to move a particular camera.
Currently I have one camera with the EarthManipulator, which works fine.
class NovaManipulator : public osgGA::CameraManipulator {
    osg::Matrixd getMatrix() const
    {
        // drone_ is only a struct which contains updated information
        float roll  = drone_->roll(),
              pitch = drone_->pitch(),
              yaw   = drone_->yaw();
        auto position = drone_->position();
        osg::Vec3d world_position;
        position.toWorld(world_position);
        cam_view_->setPosition(world_position);
        const osg::Vec3d rollAxis(0.0, 1.0, 0.0);
        const osg::Vec3d pitchAxis(1.0, 0.0, 0.0);
        const osg::Vec3d yawAxis(0.0, 0.0, 1.0);
        // I found this code:
        // https://stackoverflow.com/questions/32595198/controling-openscenegraph-camera-with-cartesian-coordinates-and-euler-angles
        osg::Quat rotationQuat;
        rotationQuat.makeRotate(
            osg::DegreesToRadians(pitch + 90), pitchAxis,
            osg::DegreesToRadians(roll), rollAxis,
            osg::DegreesToRadians(yaw), yawAxis);
        cam_view_->setAttitude(rotationQuat);
        // I don't really understand this part either
        auto nodePathList = cam_view_->getParentalNodePaths();
        return osg::computeLocalToWorld(nodePathList[0]);
    }

    osg::Matrixd getInverseMatrix() const
    {
        // Don't know why this needs to be inverted
        return osg::Matrix::inverse(getMatrix());
    }
};
Then I install the manipulator on a Viewer. When I simulate the world, the camera is in the right place (lat/lon/height).
But the orientation is completely wrong, and I cannot find where I need to "correct" the axes.
My drone is currently over France, but the "up" vector is wrong: it still points north instead of being vertical relative to the ground.
See what I'm getting on the right-hand camera.
I need the yaw to be relative to North (0 => North), and when my roll and pitch are set to zero I need to be parallel to the ground.
Is my approach (making a Manipulator) the best way to do that?
Can I put the camera object inside the scene graph (behind an osgEarth::GeoTransform, which works for my model)?
Thanks :)
In the past, I have done a cute trick using an ObjectLocator object (to get the world position and a plane-tangent-to-surface orientation), combined with a matrix to apply the HPR. There's an invert in there to make it a camera orientation matrix rather than an object placement matrix, but it works out OK.
http://forum.osgearth.org/void-ObjectLocator-setOrientation-default-orientation-td7496768.html#a7496844
It's a little tricky to see what's going on in your code snippet as the types of many of the variables aren't obvious.
AlphaPixel does lots of telemetry / osgEarth type stuff, so shout if you need help.
It is probably just the order of your rotations; matrix and quaternion multiplications are order dependent. I would suggest:
Try swapping the order of pitch and roll in your makeRotate call.
If that doesn't work, set all but one rotation to 0° at a time, making sure each is what you expect, then play with the orders (there are only 6 possible orders).
You could also make individual quaternions q1, q2, q3, where each represents h, p and r individually, and multiply them yourself to control the order, as sketched below. This is what the overload of makeRotate you're using does under the hood.
Normally you want to do your yaw first, then your pitch, then your roll (or skip roll altogether if you like), but I don't recall off-hand whether osg::Quat concatenates in a pre-mult or post-mult fashion, so it could be p-r-y order.
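A minimal sketch of that last suggestion, as a hypothetical helper (hprToAttitude is not an OSG function) that could stand in for the makeRotate call inside the question's getMatrix():
#include <osg/Quat>
#include <osg/Vec3d>
#include <osg/Math>

// Compose heading/pitch/roll (degrees) into an attitude quaternion with an
// explicit, easy-to-swap multiplication order.
osg::Quat hprToAttitude(double yawDeg, double pitchDeg, double rollDeg)
{
    const osg::Quat qYaw  (osg::DegreesToRadians(yawDeg),   osg::Vec3d(0.0, 0.0, 1.0));
    const osg::Quat qPitch(osg::DegreesToRadians(pitchDeg), osg::Vec3d(1.0, 0.0, 0.0));
    const osg::Quat qRoll (osg::DegreesToRadians(rollDeg),  osg::Vec3d(0.0, 1.0, 0.0));

    // Quaternion multiplication is order dependent: try roll*pitch*yaw first,
    // then the other orders (there are only six) if the axes still look wrong.
    return qRoll * qPitch * qYaw;
}
It could then be used as cam_view_->setAttitude(hprToAttitude(yaw, pitch, roll)); while experimenting with the order.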

In Processing, what is the fastest way to delete previous text

I tried to rewrite the text with the background color, but the edge (outline) of the old text remains on the screen. I have no idea why. Can you please help me?
background(-1);
noLoop();
fill(#500F0F);
text("99", 300, 200);
fill(-1);
text("99",300, 200);
Outcome: (screenshot showing the leftover outline)
In the future, please try to post a MCVE. The code in your post draws the text completely off the screen, which makes me wonder what else is different in your real code. Are you using a draw() function? Please avoid these uncertainties by posting a MCVE.
Anyway, your basic problem is caused by anti-aliasing. By default, Processing uses anti-aliasing to make drawings appear more smooth and less pixelated. You can see this if you zoom in to a drawing and notice the edges are a bit blurry. This is a good thing for most drawings, but in your case it's causing the blurry edges to show through.
So, to fix that problem, you could disable anti-aliasing by calling the noSmooth() function:
size(500, 500);
noSmooth();
background(255);
noLoop();
fill(#500F0F);
text("99", 300, 200);
fill(255);
text("99",300, 200);
Also notice that I'm using 255 as a parameter instead of -1. I'm not sure what a color parameter of -1 is supposed to do, so I'd keep it between 0 and 255 just to be safe.
But it's a little fishy that you need to "delete" any text in the first place. Like George's comment says, why don't you just call the background() function to clear out old frames?
Here's a small example:
void draw() {
  background(64);
  if (mousePressed) {
    text("hello", 20, 40);
  }
}

X11 FocusIn is not working

As far as I understand it, the X11 FocusIn event is triggered whenever a window comes into focus, i.e. becomes the window that keyboard input is sent to. I am having trouble triggering this event. I have made sure to give the window the FocusChangeMask when creating it. I have set a breakpoint inside my event handler where the FocusIn event is supposed to be handled, and it is never hit.
I have two separate windows, a transparent one and a non-transparent one. Currently the transparent window is always on top of the non-transparent window. Whenever I switch focus away and then back to the transparent window, the non-transparent window is directly underneath, which causes other windows to get stuck "between" the transparent and non-transparent windows.
I have noticed that focusing the non-transparent window underneath does trigger the FocusIn event, but I cannot get the transparent window to trigger it. Does this have something to do with the window using 32-bit color?
What am I missing?
while (!renderer->stop)
{
    XNextEvent(renderer->x_display, &event);
    switch (event.type)
    {
    case Expose:
        if (event.xexpose.window == gstreamer_window)
        {
            XRaiseWindow(renderer->x_display, renderer->opengl_window);
        }
        break;
    case FocusIn:
        if (event.xfocus.window == renderer->opengl_window)
        {
            XRaiseWindow(renderer->x_display, gstreamer_window);
        }
        break;
    case ConfigureNotify:
        if (event.xconfigure.window == renderer->opengl_window)
        {
            XMoveWindow(renderer->x_display, gstreamer_window,
                        event.xconfigure.x, event.xconfigure.y - top_border_offset);
        }
        break;
    }
}
Here is how I created the window.
XSetWindowAttributes swa;
swa.event_mask = ExposureMask | PointerMotionMask | KeyPressMask | FocusChangeMask;
swa.colormap = XCreateColormap(x_display, XDefaultRootWindow(x_display), visual, AllocNone);
swa.background_pixel = 0;
swa.border_pixel = 0;

/* Create a window */
opengl_window = XCreateWindow(
    x_display, parent,
    0, 0, m_plane_width, m_plane_height, 0,
    depth, InputOutput,
    visual, CWEventMask | CWBackPixel | CWColormap | CWBorderPixel,
    &swa);
It appears I set the FocusChangeMask in the wrong place. By adding the line XSelectInput(x_display, opengl_window, FocusChangeMask), it now triggers the FocusIn event. The event was firing for the other window because that one had the mask, but this one didn't.
It's not clear from your question whether the windows you reference are top-level application windows, or secondary, ancillary windows that are children of your top-level application windows.
But in all cases, proper input focus handling requires you to inform the window manager exactly how your application expects to handle input focus.
See section 4.1.7 of the ICCCM specification for more information. Unfortunately, it's not enough just to code an event loop that handles FocusIn messages and expect them to fall out of the sky. First you need to tell your window manager how exactly you are going to handle input focus switching, then respond to the window manager's messages as explained in the ICCCM specification, perhaps by explicitly sending SetInputFocus requests for your own application windows.
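For example, a minimal sketch (reusing the x_display and opengl_window names from the question) of declaring to the window manager, per ICCCM section 4.1.7, that the window expects keyboard input:
/* Tell the window manager this window wants keyboard input (ICCCM input hint).
 * Do this before mapping the window, e.g. right after XCreateWindow(). */
XWMHints hints;
hints.flags = InputHint;
hints.input = True;
XSetWMHints(x_display, opengl_window, &hints);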
See my post under the OP's self-accepted answer for background.
I had the same problem, and it seems that if you use XSelectInput it overrides the event mask for a window instead of adding to it. You need to either:
skip XSelectInput and set all the event masks on the window when it is created,
or pass all of the masks the window needs to the XSelectInput call (see the sketch further down).
Here's my similar setup which worked before I added the XSelectInput line:
swa.event_mask = StructureNotifyMask | SubstructureNotifyMask | FocusChangeMask | ExposureMask | KeyPressMask;
wnd = XCreateWindow(disp, wndRoot, left, top, width, height, 0, vi->depth, InputOutput, vi->visual, CWColormap | CWEventMask, &swa);
XMapWindow(disp, wnd);
When I added this for capturing key input:
XSelectInput(disp, wnd, KeyPressMask | KeyReleaseMask);
everything else stopped working, which I didn't realize for a while, until now, because I was just testing keys.
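A minimal sketch of the second option, reusing the disp and wnd variables from the setup above: keep every mask the window needs in the single XSelectInput call, since it replaces the window's event mask rather than adding to it.
/* XSelectInput replaces the event mask, so list everything the window needs. */
XSelectInput(disp, wnd,
             StructureNotifyMask | SubstructureNotifyMask | FocusChangeMask |
             ExposureMask | KeyPressMask | KeyReleaseMask);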
I have a single-window application with no controls in the client area, so I don't know what else XSelectInput does for a window that already has the masks, but I do get key up and down messages without it. In the OP's self-accepted answer he just goes the other way and adds FocusChangeMask to XSelectInput instead.
The documentation for a lot of X11 is, in my opinion, a lot like the standard Apple documentation: it has the affliction of never saying what the function really does or affects, but tells you all about everything else. (See the bottom snippet here.) To be fair, a good portion of the Cocoa docs are pretty decent, but maybe half are not. That goes for many or most of the parameters as well.
From the Linux man page, which I think is maybe verbatim the same as the Xlib manual here: https://tronche.com/gui/x/xlib/
The XSelectInput function requests that the X server report the events associated with the specified event mask.
Setting the event-mask attribute of a window overrides any previous call for the same window but not for other clients.
(Below is my own, perhaps incorrect, rewording of the XSelectInput docs.)
This can be interpreted as perhaps:
XSelectInput sets the event mask on a window for a display for events reported to the client. Each client can set its own independent mask for events it wants to receive, with the following exceptions (skipping the exceptions).
Now the critical part:
XSelectInput will override and replace, for a client, any previous event-mask settings for a window on a display, including any event mask set when creating the window.
And what I wish it were written as, to make sure it's understood (if true) that XSelectInput does nothing else, like enabling key-to-character interpretation and whatnot, because I believe the core confusion is that people get the impression from all the various examples out there that there is some other magic besides setting the mask. I'm assuming there's not, and the docs speak of no other magic besides the mask:
XSelectInput sets the event mask for a window and allows setting an event mask after a window is created.
And my opinion is that that line should have come first.

iPhone help with animations CGAffineTransform resetting?

Hi, I am totally confused by CGAffineTransform animations. All I want to do is move a sprite from a position on the right to a position on the left. When it has stopped, I want to "reset" it, i.e. move it back to where it started. If the app exits (with multitasking), I want to reset the position again on launch and repeat the animation.
This is what I am using to make the animation:
[UIImageView animateWithDuration:1.5
                            delay:0.0
                          options:(UIViewAnimationOptionAllowUserInteraction |
                                   UIViewAnimationOptionCurveLinear)
                       animations:^(void){
                           ufo.transform = CGAffineTransformTranslate(ufo.transform, -270, 100);
                       }
                       completion:^(BOOL finished){
                           if (finished) {
                               NSLog(@"ufo finished");
                               [self ufoAnimationDidStop];
                           }
                       }];
As I understand it, a CGAffineTransform just visually makes the sprite look like it has moved but doesn't actually move it. Therefore, when I try to "reset" the position using
ufo.center = CGPointMake(355, 70);
it doesn't do anything.
I do have something working: if I call
ufo.transform = CGAffineTransformTranslate(ufo.transform, 270, -100);
it resets. The problem is that if I exit the app halfway through the animation, then when it restarts it doesn't necessarily start from the beginning and it doesn't go to the right place; it basically goes crazy!
Is there a way to just remove any transforms applied to it? I'm considering just using a timer, but that seems silly when this method should work. I've been struggling with this for some time, so any help would be much appreciated.
Thanks
Applying a transform to a view doesn't actually change the center or the bounds of the view; it just changes the way the view is shown on the screen. You want to set the transform back to CGAffineTransformIdentity to make it look "normal" again. You can set it to that before you start your animation, and then set it to the transform you want it to animate to.
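A minimal sketch along those lines (assuming the ufo image view and the starting point from the question; the -270/+100 offsets are taken from the original code): reset the transform and position before each run, then animate to an absolute translation so a half-finished previous run can't accumulate.
// Reset any leftover transform so the animation always starts from the same state.
ufo.transform = CGAffineTransformIdentity;
ufo.center = CGPointMake(355, 70);

[UIView animateWithDuration:1.5
                      delay:0.0
                    options:(UIViewAnimationOptionAllowUserInteraction |
                             UIViewAnimationOptionCurveLinear)
                 animations:^{
                     // Absolute translation from the identity transform, not relative
                     // to whatever transform happened to be applied before.
                     ufo.transform = CGAffineTransformMakeTranslation(-270, 100);
                 }
                 completion:^(BOOL finished) {
                     if (finished) {
                         NSLog(@"ufo finished");
                         [self ufoAnimationDidStop];
                     }
                 }];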
