In Tizen, Orientation sensor value is always -1 - sensors

Using the Sensor API (sensor.h), I want to get the orientation sensor values.
I followed the tutorial (URL: https://developer.tizen.org/development/tutorials/native-application/system/sensor?langredirect=1#retrieve) and wrote the same code, except that I selected the orientation sensor.
But when I finally call the sensor_listener_read_data() function, all I get is -1.
As far as I know, the azimuth range should be between 0 and 360, so this seems wrong.
When I googled, somebody said the sensor might be broken, but I'm not sure.
Does anyone know how to solve this, or what the reason is?

The Gear S2 doesn't support the orientation sensor.
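One way to confirm this is to ask the Sensor API whether the device has an orientation sensor at all before creating a listener. A minimal sketch (error handling trimmed, log tag made up):

#include <sensor.h>
#include <stdbool.h>
#include <dlog.h>

/* Sketch: check whether an orientation sensor exists before reading it. */
void check_orientation_sensor(void)
{
    bool supported = false;
    int err = sensor_is_supported(SENSOR_ORIENTATION, &supported);

    if (err != SENSOR_ERROR_NONE || !supported) {
        /* e.g. on the Gear S2 the orientation sensor is simply not there */
        dlog_print(DLOG_ERROR, "SENSOR", "Orientation sensor not supported");
        return;
    }

    sensor_h sensor;
    sensor_get_default_sensor(SENSOR_ORIENTATION, &sensor);
    /* ... create the listener, start it, then sensor_listener_read_data() ... */
}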

Related

2D Godot raycasting is unintuitive to use & 2D global coordinates and rotation

Hey, so I am working on an interactive mini game at work. New to Godot!
Item.rotate(rand_range(-1.22173,1.22173)) is supposed to shoot missiles at random within approximately 70 degrees, but it's doing more like 140 degrees and I don't know why when I rotate.
Item = ammo.instance()
Item.position = Vector2(960,0)
Item.rotate(rand_range(-1.22173,1.22173))
add_child(Item)
bulletArray.append(Item)
var rot = Item.rotation
angle = rot + PI/2
angleArray.append(angle)
Then, when I shoot it in the bullet array loop:
x.position += Vector2(1, 0).rotated(angleArray[index]) * speed * delta
It's like 1-3 degrees off sometimes; you can slightly notice it, so it's weird.
I wanted raycasting just for some visual elements there, maybe adding a collider later, I don't know. I tried
var randomvec = Vector2(rand_range(-400, 1120), 1080)
with local coordinates, where y is 0 and the window size in y is 1080. It just shoots so much wider than it should, and there's no documentation I can find. When I used something more like ((-30, 30), 1080) * MAX (MAX was 1050) I was getting better results.
Is there any reliable way to use global coordinates in 2D games, mostly for everything?
Thanks! Just trying something new, I'm new to this engine.
It's good enough, but the angle isn't perfect. How do I fix that?
I tried that code; shooting works and the angle is 98% perfect. Raycasting was really hard to get onto the right area at random. I tried looking at some documentation but can't figure it out.
Is there an easy way to just use global coordinates for almost everything?
Okay, I'm an idiot, I'm working 12 hours a day... I just need Item.rotate(rand_range(-1.22173, 1.22173) / 2). Sorry about that, I'm really overworking... of course it's 140 degrees... but the raycasting question and the angle being a couple of degrees off still stand. The gameplay was still better when it was a little wider, so I'll try 90 degrees (PI/2). lol haha. Need to take a break.
As you must have figured out, 1.22173 radians is 70º, which means the range from -1.22173 to 1.22173 goes from -70º to 70º, a total of 140º.
I will also remind you that you can use the to_local and to_global methods to convert from and to global coordinates.
Furthermore, in 2D you can use global_rotation.
And if you want to transform a direction, you can use node2d.global_transform.basis_xform(direction) (or use basis_xform_inv for the inverse).
It's like 1-3 degrees off sometimes; you can slightly notice it, so it's weird.
I'm guessing this is floating-point error. Work with the rotated direction (a unit vector) instead of storing an angle and rotating again. Or use the transform of the Node2D as mentioned above.

dithering for darker tones

(English is not my native tongue and it's pretty late.)
I have been facing a problem for some days and, after many unsuccessful attempts, I decided to come here for some help, or at least a direction.
We will talk about dithering and color banding.
I will not try to explain here what these are; I will just assume that the reader already knows.
Ok, let's start with a picture
On this image, there is no dithering. Color bands are easy to see, if not on the wall at least on the floor. Because I am generating this image, I do have access to higher precision for each pixel during the generation.
If a channel is an integer from 0 to 255, I do have access to values like 5.24 (the important thing is the ".24" - this is what I mean by "higher precision").
My original idea was to use a noise based on the pixel's position on screen. If we assume that the noise yields values between 0 and 1, then for 5.24 the algorithm would look like
if ( noise(x,y) < 0.24 ) color = 6
else color = 5
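Written out as a small function (a sketch; noise01 stands for the screen-position noise, already scaled to [0, 1)):

#include <math.h>

/* Sketch of the "linear" dithering above: keep the integer part and use the
   fractional part (.24 for 5.24) as the probability of rounding up. */
int dither_linear(double value, double noise01)
{
    double base = floor(value);          /* 5   for 5.24 */
    double frac = value - base;          /* .24 for 5.24 */
    return (noise01 < frac) ? (int)base + 1 : (int)base;
}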
I was also using "white noise" and I obtained this result.
It's a lot smoother, but the result is very noisy. Plus, there is still color banding, and that is the problem I have.
By replacing the white noise with "blue noise", I get this last image,
which is a lot better in terms of the noisiness of the resulting image but ... there are still color bands.
I tried to figure out why.
I think it's because I use what I would call a "linear split" or "linear gradient" or "linear interpolation" ... or "linear" something, but it's linear.
It's the formula I already gave:
if ( noise(x,y) < 0.24 ) color = 6
else color = 5
If it's not 0.24 but 0.5, it means a 50/50 mix of (5,5,5) and (6,6,6). However, such an image will appear closer to (6,6,6) than to (5,5,5).
Somehow, I think that it's wrong to assume that the step between (0,0,0) and (1,1,1) is the same as the step between (200,200,200) and (201,201,201).
I think it shouldn't be thought as an addition but as a ratio.
And the formula above doesn't really take the "5" into account (it will behave the same way for 5.24 and 200.24).
I don't know if I'm explaining myself well, and it's already really late.
To summarize, I am looking for a formula that takes into account
not only the ".24" part
not only the position of the pixel
but also the "5" part.
I think that there is an exponential somewhere (because there is always an exponential in optics) and that the result will depend on the monitor (and what I am looking for is more related to CMYK than to RGB).
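To make the idea concrete, here is the kind of non-linear variant I have in mind: pick the fraction so that the average emitted light, not the average code value, matches the target. The pow(x, 2.2) curve is just a placeholder for the real monitor response, which is exactly the part I am unsure about:

#include <math.h>

/* Placeholder monitor response: code value (0..255) -> relative light (0..1).
   The 2.2 exponent is an assumption, not a measured value. */
static double to_light(double code)
{
    return pow(code / 255.0, 2.2);
}

/* Dither so that the AVERAGE LIGHT of the 5/6 mix matches the light the
   high-precision value (e.g. 5.24) should emit; note that the fraction now
   depends on the "5" part as well, not only on the ".24" part. */
int dither_nonlinear(double value, double noise01)
{
    double lo = floor(value), hi = lo + 1.0;
    double frac = (to_light(value) - to_light(lo)) / (to_light(hi) - to_light(lo));
    return (noise01 < frac) ? (int)hi : (int)lo;
}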
But, still, is there something I can do? Is there a name for this "non-linear increase of the value of the color"?
Is there something I can ask Google?
Thanks a lot.

Short-term position tracking with IMU

I have researched quite a bit about short-term position tracking with an IMU, and can't really seem to find anything on it. A lot of people say it's impossible to track position with an accelerometer, but all of this is in the context of long-term position tracking. I'm just looking for something less than a second.
I Googled around and found this video.
This shows it being done with an IMU, but when I take the double integral of the acceleration, it gets really messy. Any suggestions on how to approach this problem? Will a Kalman filter solve some of the issues?
The errors in position based on the double-integrated accelerometer signal are related to:
Drift of the bias in the accelerometer signal. A small error in the estimated bias leads to a position error that explodes very fast: a constant bias error b double-integrates to a position error of 0.5 * b * t^2, so it grows quadratically with time.
Gravity. Unless the orientation of the accelerometer is exactly perpendicular to gravity, there will be a gravity component in the accelerometer signal.
Adding additional knowledge/measurements can help reduce the growth of your position error, e.g.:
Tracking the orientation of the accelerometer using a gyro and/or magnetic sensor (9 DOF). If the orientation is known, the gravity component in the accelerometer signal can be calculated and removed (see the sketch below).
Detecting specific situations with known orientation or speed. In the case of the video, there could be a detection that the stone is flat on the board (vertical speed is zero, orientation is horizontal) or in one of the corners (position known, speed zero for some time).
This may be implemented using a Kalman filter.
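As a rough sketch of the gravity-removal step in the first point above (assuming the orientation filter gives you a body-to-world rotation matrix R; the names are made up for illustration):

#include <math.h>

#define GRAVITY 9.81

/* Sketch: rotate the measured (body-frame) acceleration into the world frame
   using the rotation matrix R from the orientation filter, then subtract
   gravity.  What remains is the linear acceleration you would integrate. */
void remove_gravity(const double R[3][3], const double acc_body[3],
                    double acc_lin[3])
{
    for (int i = 0; i < 3; ++i) {
        acc_lin[i] = 0.0;
        for (int j = 0; j < 3; ++j)
            acc_lin[i] += R[i][j] * acc_body[j];   /* acc_world = R * acc_body */
    }
    acc_lin[2] -= GRAVITY;   /* remove gravity along the world z axis */
}

Even after this, any remaining bias in acc_lin still double-integrates into position error, which is why the zero-velocity / known-position detections above (typically fused in a Kalman filter) are what keep the estimate bounded.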

Stop Camshift algorithm

I am using the CAMshift algorithm for my final year project but am stuck on a point. I am not able to terminate the algorithm automatically. Even after I remove the object from in front of the camera, the algorithm keeps tracking. I have heard about the termination criteria but don't know whether it is applicable here or not. Any help is appreciated. Thanks in advance.
Here's my code on github: code
Thanks
OK, I figured it out myself. The ROI in the actual image can be cropped to get a new image. Comparing the actual object size with the ROI size, if the ratio goes beyond a specific value, we can determine that the object is lost.
In my case, the ratio is taken as 0.3.
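A sketch of that check, independent of the tracking code (I am assuming "lost" means the tracked window has shrunk relative to the original object; adjust the comparison if it grows instead):

#include <stdbool.h>

/* Sketch: compare the area of the current tracking window (returned by
   CamShift each frame) with the area of the object ROI from the start of
   tracking; if the ratio falls past the threshold (0.3 here), report lost. */
bool object_lost(double track_w, double track_h,
                 double obj_w, double obj_h, double threshold)
{
    double ratio = (track_w * track_h) / (obj_w * obj_h);
    return ratio < threshold;   /* e.g. threshold = 0.3 */
}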

Position calculation of small model of a car using Accelerometer + Gyroscope

I wish to calculate the position of a small remote-controlled car (relative to its starting position). The car moves on a flat surface, for example a room floor.
Now, I am using an accelerometer and a gyroscope. To be precise, this board --> http://www.sparkfun.com/products/9623
As a first step, I just took the accelerometer data in the x and y axes (since the car moves on a surface) and double-integrated the data to get position. The formulas I used were:
vel_new = vel_old + ( acc_old + ( (acc_new - acc_old ) / 2.0 ) ) * SAMPLING_TIME;
pos_new = pos_old + ( vel_old + ( (vel_new - vel_old ) / 2.0 ) ) * SAMPLING_TIME;
vel_old = vel_new;
pos_old = pos_new;
acc_new = measured value from accelerometer
The above formulas are based on this document: http://perso-etis.ensea.fr/~pierandr/cours/M1_SIC/AN3397.pdf
However, this gives a horrible error.
After reading other similar questions on this forum, I found out that I need to subtract the gravity component from the above acceleration values (every time, from acc_new) by using the gyroscope somehow. This idea is explained very well in the Google Tech Talk video Sensor Fusion on Android Devices: A Revolution in Motion Processing at 23:49.
Now my problem is: how do I subtract that gravity component?
I get angular velocity from the gyroscope. How do I convert it into acceleration so that I can subtract it from the output of the accelerometer?
It won't work; these sensors are not accurate enough to calculate position.
The reason is also explained in the video you are referring to.
The best you could do is to get the velocity based on the rpm of the wheels. If you also know the heading that belongs to that velocity, you can integrate the velocity to get position. Far from perfect, but it should give you a reasonable estimate of the position.
I do not know how you could get the heading of the car; it depends on the hardware you have (a sketch of the update is below).
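If you do have a wheel-speed (rpm) reading and some heading estimate (e.g. from integrating the gyro's yaw rate, which will slowly drift), the dead-reckoning update itself is simple. A sketch, with made-up variable names:

#include <math.h>

/* Sketch: dead reckoning from wheel rpm and a heading estimate.
   wheel_radius in metres, rpm from a wheel encoder, yaw_rate in rad/s
   from the gyro, dt = SAMPLING_TIME in seconds. */
typedef struct { double x, y, heading; } pose_t;

void dead_reckon_step(pose_t *p, double rpm, double wheel_radius,
                      double yaw_rate, double dt)
{
    double v = rpm * (2.0 * M_PI / 60.0) * wheel_radius;  /* wheel speed, m/s */

    p->heading += yaw_rate * dt;         /* heading drifts with gyro bias */
    p->x += v * cos(p->heading) * dt;
    p->y += v * sin(p->heading) * dt;
}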
I'm afraid Ali's answer is quite true when it comes to those devices. However, why don't you try searching for "arduino dead reckoning", which will turn up stories of people trying similar boards?
Here's a link that appeared after a search that I think may help you:
Converting IMU to INS
Even though it seems like all of them failed, you may come across workarounds that decrease the errors to acceptable amounts, or you can calibrate your algorithm with some other sensor to put it back on track, since the squared error of the acceleration together with the gyro's white noise destroys the accuracy.
One reason you have a huge error is that the equation appears to be wrong.
Example: to get the updated velocity, use vel_new = vel_old + ((acc_old + acc_new) / 2.0) * SAMPLING_TIME.
It looks like you had an extra acceleration term in the equation. Using this alone will not correct all the error; as you say, you need to correct for the influence of gravity and other things.
