I've been looking for a way to adjust the size of a 2D game object depending on its distance from the origin. When the game object is 4 units from the origin, it should have a local scale of (1,1,1); when it reaches the origin, it should have a local scale of (0,0,1). This should give the illusion that the game object is getting further away. If anybody knows how to achieve this, I'd much appreciate you letting me know.
Thanks in Advance,
Tommy
You could use linear interpolation on the scale value. Take the Vector3 from the transform's local scale and pass in the distance from the origin.
To show some pseudocode of what I'm talking about:
get the transform
in Update, work out the distance from the origin
get the lerped value: Vector3.Lerp(new Vector3(0,0,1), new Vector3(1,1,1), distanceFromOrigin / 4)
Added example code
public class Scaling : MonoBehaviour
{
    private Transform trans;

    void Start ()
    {
        trans = gameObject.transform;
    }

    void Update ()
    {
        float dist = Vector3.Distance(Vector3.zero, trans.position);
        // beyond 4 units the object keeps its full size
        if (dist > 4f)
        {
            trans.localScale = Vector3.one;
            return;
        }
        // work out the new scale: (0,0,1) at the origin up to (1,1,1) at 4 units out
        trans.localScale = Vector3.Lerp(Vector3.forward, Vector3.one, dist / 4f);
    }
}
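Since Vector3.Lerp clamps its t parameter to the [0,1] range, you could also drop the early-out branch entirely; a minimal variant of the Update method:

void Update ()
{
    float dist = Vector3.Distance(Vector3.zero, trans.position);
    // Lerp clamps dist / 4 to [0,1], so anything beyond 4 units simply stays at full scale
    trans.localScale = Vector3.Lerp(Vector3.forward, Vector3.one, dist / 4f);
}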
I'm currently developing an augmented reality application and I need to change the size of my objects using pinch zoom. I found the code below on the net, but it is not working. After adding the script, my object went missing, and whenever I pinch, it appears and then goes missing again, and it is very small, only a dot on the screen. What is the possible reason? Thanks!
public static GameObject selectedObject;
//public GameObject gameobject;

// Update is called once per frame
void Update () {
    if ( Input.touchCount == 0 )
    {
        Touch touch = Input.touches[0];
        Ray ray = Camera.main.ScreenPointToRay(touch.position);
        RaycastHit hit;
        if ( Physics.Raycast(ray, out hit, 100f ) )
        {
            selectedObject = hit.collider.gameObject;
        }
    }
    if (Input.touchCount == 2)
    {
        // Store both touches.
        Touch touchZero = Input.GetTouch(0);
        Touch touchOne = Input.GetTouch(1);
        // Find the position in the previous frame of each touch.
        Vector2 touchZeroPrevPos = touchZero.position - touchZero.deltaPosition;
        Vector2 touchOnePrevPos = touchOne.position - touchOne.deltaPosition;
        // Find the magnitude of the vector (the distance) between the touches in each frame.
        float prevTouchDeltaMag = (touchZeroPrevPos - touchOnePrevPos).magnitude;
        float touchDeltaMag = (touchZero.position - touchOne.position).magnitude;
        // Find the difference in the distances between each frame.
        float deltaMagnitudeDiff = prevTouchDeltaMag - touchDeltaMag;
        selectedObject.transform.localScale = new Vector3(deltaMagnitudeDiff, deltaMagnitudeDiff, deltaMagnitudeDiff);
    }
}
The issue with setting local scale using a delta is that you aren't changing the scale of the object in the way you believe you are. You're setting the scale to the delta in each axis, which may be a very, very small number.
The reason your object disappears when the user is not scaling it is that during that time the value of deltaMagnitudeDiff is 0, so you are scaling your box by a factor of 0 in every direction (which shrinks it to a single point at its location). While a user is scaling the box, the box is only as large as deltaMagnitudeDiff, so moving your fingers quickly would probably make the box appear larger than moving them slowly. Once the user stops scaling, deltaMagnitudeDiff is 0 again, since the positions of the user's fingers are no longer changing.
You should add your deltaMagnitudeDiff to the current local scale of the object.
Here is a modification of the last two lines of your Update() method, including the comment directly above the second to last line:
// Find the difference in distances between each frame.
float deltaMagnitudeDiff = prevTouchDeltaMag - touchDeltaMag;
Vector3 newScale = selectedObject.transform.localScale - new Vector3(deltaMagnitudeDiff, deltaMagnitudeDiff, deltaMagnitudeDiff);
selectedObject.transform.localScale = newScale;
Final script (note: the selection check should be touchCount == 1 rather than 0; with zero touches, Input.touches[0] would index an empty array and throw):
public static GameObject selectedObject;
//public GameObject gameobject;

// Update is called once per frame
void Update () {
    // select the object under a single touch
    if ( Input.touchCount == 1 )
    {
        Touch touch = Input.touches[0];
        Ray ray = Camera.main.ScreenPointToRay(touch.position);
        RaycastHit hit;
        if ( Physics.Raycast(ray, out hit, 100f ) )
        {
            selectedObject = hit.collider.gameObject;
        }
    }
    if (Input.touchCount == 2)
    {
        // Store both touches.
        Touch touchZero = Input.GetTouch(0);
        Touch touchOne = Input.GetTouch(1);
        // Find the position in the previous frame of each touch.
        Vector2 touchZeroPrevPos = touchZero.position - touchZero.deltaPosition;
        Vector2 touchOnePrevPos = touchOne.position - touchOne.deltaPosition;
        // Find the magnitude of the vector (the distance) between the touches in each frame.
        float prevTouchDeltaMag = (touchZeroPrevPos - touchOnePrevPos).magnitude;
        float touchDeltaMag = (touchZero.position - touchOne.position).magnitude;
        // Find the difference in distances between each frame.
        float deltaMagnitudeDiff = prevTouchDeltaMag - touchDeltaMag;
        Vector3 newScale = selectedObject.transform.localScale - new Vector3(deltaMagnitudeDiff, deltaMagnitudeDiff, deltaMagnitudeDiff);
        selectedObject.transform.localScale = newScale;
    }
}
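One further caveat: since the new scale is the old scale minus an unbounded delta, a fast pinch can still drive it to zero or negative, making the object vanish again. A sketch of a guard, applied to newScale from the script above (the 0.1 and 10 bounds are arbitrary, pick whatever suits your scene):

// clamp each component so the object can neither invert nor collapse to a point
float clamped = Mathf.Clamp(newScale.x, 0.1f, 10f);
selectedObject.transform.localScale = new Vector3(clamped, clamped, clamped);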
I'm trying to get a mobile robot to map an arena based on what it can see from a camera. I've created a map and managed to get the robot to identify items placed in the arena and give an estimated location; however, as I'm only using an RGB camera, the resulting numbers can vary slightly every frame due to noise, changes in lighting, etc. What I am now trying to do is create a probability map, using Bayes' formula, to give a better map of the arena.
Bayes' formula:
P(i|x) = p(i) p(x|i) / sum_j p(j) p(x|j)
This is what I've got so far. All points on the map are initialised to 0.5.
// Gets the likelihood of the event being correct
// Param 1 = is the object likely to be at that location?
// Param 2 = is the sensor saying it's at that location?
private double getProbabilityNum(bool world, bool sensor)
{
    if (world && sensor)
    {
        // number to test the function works
        return 0.6;
    }
    else if (world && !sensor)
    {
        // number to test the function works
        return 0.4;
    }
    else if (!world && sensor)
    {
        // number to test the function works
        return 0.2;
    }
    else //if (!world && !sensor)
    {
        // number to test the function works
        return 0.8;
    }
}
// A function to update the map's probability of an object being at location (x,y)
// Param 3 = does the sensor pick up an object at (x,y)?
public double probabilisticMap(int x, int y, bool sensor)
{
    // gets the current likelihood from the map (prior probability)
    double mapProb = get(x,y);
    // decide if an object is at location (x,y)
    bool world = (mapProb < threshold);
    // Bayes' formula to update the probability
    double newProb =
        (getProbabilityNum(world, sensor) * mapProb) / ((getProbabilityNum(world, sensor) * mapProb) + (getProbabilityNum(!world, sensor) * (1 - mapProb)));
    // update the location on the map
    set(x,y,newProb);
    // return the probability as well
    return newProb;
}
It does work, but the numbers seem to jump rapidly and then flicker when they are at the top, and it also errors if the numbers drop too near to zero. Does anyone have any idea why this might be happening? I think it's something to do with the way the equation is coded, but I'm not too sure. (I found this, but I don't quite understand it, so I'm not sure of its relevance, though it seems to be talking about the same thing.)
Thanks in Advance.
Use log-likelihoods when doing numerical computations involving probabilities.
Consider
P(i|x) = p(i) p(x|i) / sum_j p(j) p(x|j).
Because x is fixed, the denominator, p(x), is a constant. Thus
P(i | x) ~ p(i)p(x|i)
where ~ denotes "is proportional to."
The log-likelihood function is just the log of this. That is,
L(i | x) = log(p(i)) + log(p(x|i)).
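To make that concrete, here is a minimal C# sketch (the class and parameter names are mine) that performs the same two-hypothesis update as probabilisticMap above, but in log space, normalising with the log-sum-exp trick so the result stays stable even when the raw probabilities sit very close to zero:

using System;

static class LogBayes
{
    // Returns P(world | observation) from the prior P(world) and the two
    // likelihoods p(obs | world) and p(obs | !world), computed in log space.
    public static double Posterior(double prior, double likeIfWorld, double likeIfNotWorld)
    {
        double a = Math.Log(prior) + Math.Log(likeIfWorld);        // log p(i) + log p(x|i)
        double b = Math.Log(1 - prior) + Math.Log(likeIfNotWorld); // log p(j) + log p(x|j)
        // log-sum-exp: log(e^a + e^b) evaluated without underflow
        double m = Math.Max(a, b);
        double logNorm = m + Math.Log(Math.Exp(a - m) + Math.Exp(b - m));
        return Math.Exp(a - logNorm);
    }
}

With the test numbers from the question, Posterior(0.5, 0.6, 0.2) gives 0.75, the same result as the direct formula, but the intermediate sums of logs are far less prone to underflow than products of small probabilities.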
I am new to Unity3D and I want to build a helix-shaped road for my game.
How can I create a downward helix road? Is there any tutorial for it?
Please also clarify the steps to create a road in general in Unity3D, if you can.
I would create a single 360° part of that road, HelixPart, in Blender or whatever your favourite modelling tool is, then import it into Unity.
Create an empty GameObject and attach a component, let's call it HelixRoadCreator:
public class HelixRoadCreator : MonoBehaviour {
    public int noOfParts = 5;
    public GameObject helixPartPrefab;

    void Start ()
    {
        Vector3 newPosition = transform.position;
        for (int i = 1; i <= noOfParts; i++) {
            GameObject part = Instantiate (helixPartPrefab, newPosition, transform.rotation) as GameObject;
            part.transform.parent = this.gameObject.transform;
            // recalculate newPosition, something like: newPosition += i * heightOfPart
        }
    }
}
Now drag your imported HelixPart onto helixPartPrefab in HelixRoadCreator and adjust noOfParts to the number you want. Fiddle around to find the right height calculation parameters et voilà.
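One note on that in-code comment, plus a minimal sketch: because newPosition already accumulates from one iteration to the next, step it by a fixed amount per part rather than by i * heightOfPart, which would grow quadratically. heightOfPart here is an assumed extra field you would set to the vertical drop of one 360° segment of your model:

public float heightOfPart = 2f; // assumed: vertical drop of one full turn of the model

// inside the for loop, after parenting the part:
newPosition += Vector3.down * heightOfPart;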
I have a heightmap. I want to efficiently compute which tiles in it are visible from an eye at any given location and height.
This paper suggests that heightmaps outperform turning the terrain into some kind of mesh, but they sample the grid using Bresenham's.
If I were to adopt that, I'd have to do a line-of-sight Bresenham's line for each and every tile on the map. It occurs to me that it ought to be possible to reuse most of the calculations and compute the visibility for the whole heightmap in a single pass if you fill outwards away from the eye, a scanline-fill kind of approach perhaps?
But the logic escapes me. What would the logic be?
Here is a heightmap with the visibility from a particular vantage point (green cube) painted over it ("viewshed", as in "watershed"?):
Here is the O(n) sweep that I came up with. It seems the same as the one given in the paper in the answer below (How to compute the visible area based on a heightmap? Franklin and Ray's method), only in this case I am walking from the eye outwards instead of walking the perimeter doing a Bresenham's towards the centre. To my mind, my approach would have much better caching behaviour, i.e. be faster, and use less memory, since it doesn't have to track the vector for each tile, only remember a scanline's worth:
typedef std::vector<float> visbuf_t;

inline void map::_visibility_scan(const visbuf_t& in, visbuf_t& out, const vec_t& eye, int start_x, int stop_x, int y, int prev_y) {
    const int xdir = (start_x < stop_x)? 1: -1;
    for(int x=start_x; x!=stop_x; x+=xdir) {
        const int x_diff = abs(eye.x-x), y_diff = abs(eye.z-y);
        const bool horiz = (x_diff >= y_diff);
        const int x_step = horiz? 1: x_diff/y_diff;
        const int in_x = x-x_step*xdir; // where in the in buffer would we get the inner value?
        const float outer_d = vec2_t(x,y).distance(vec2_t(eye.x,eye.z));
        const float inner_d = vec2_t(in_x,horiz? y: prev_y).distance(vec2_t(eye.x,eye.z));
        const float inner = (horiz? out: in).at(in_x)*(outer_d/inner_d); // get the inner value, scaling by distance
        const float outer = height_at(x,y)-eye.y; // height we are at right now in the map, eye-relative
        if(inner <= outer) {
            out.at(x) = outer;
            vis.at(y*width+x) = VISIBLE;
        } else {
            out.at(x) = inner;
            vis.at(y*width+x) = NOT_VISIBLE;
        }
    }
}

void map::visibility_add(const vec_t& eye) {
    const float BASE = -10000; // represents a downward vector that would always be visible
    visbuf_t scan_0, scan_out, scan_in;
    scan_0.resize(width);
    vis[eye.z*width+eye.x-1] = vis[eye.z*width+eye.x] = vis[eye.z*width+eye.x+1] = VISIBLE;
    scan_0.at(eye.x) = BASE;
    scan_0.at(eye.x-1) = BASE;
    scan_0.at(eye.x+1) = BASE;
    _visibility_scan(scan_0,scan_0,eye,eye.x+2,width,eye.z,eye.z);
    _visibility_scan(scan_0,scan_0,eye,eye.x-2,-1,eye.z,eye.z);
    scan_out = scan_0;
    for(int y=eye.z+1; y<height; y++) {
        scan_in = scan_out;
        _visibility_scan(scan_in,scan_out,eye,eye.x,-1,y,y-1);
        _visibility_scan(scan_in,scan_out,eye,eye.x,width,y,y-1);
    }
    scan_out = scan_0;
    for(int y=eye.z-1; y>=0; y--) {
        scan_in = scan_out;
        _visibility_scan(scan_in,scan_out,eye,eye.x,-1,y,y+1);
        _visibility_scan(scan_in,scan_out,eye,eye.x,width,y,y+1);
    }
}
Is it a valid approach?
It is using centre-points rather than looking at the slope between the 'inner' pixel and its neighbour on the side that the LoS passes.
Could the trig used to scale the vectors and such be replaced by factor multiplication?
It could use an array of bytes, since the heights are themselves bytes.
It's not a radial sweep; it's doing a whole scanline at a time, but away from the point. It uses only a couple of scanlines' worth of additional memory, which is neat.
If it works, you could imagine distributing it nicely using a radial sweep of blocks; you have to compute the centre-most tile first, but then you can distribute all immediately adjacent tiles from that (they just need to be given the edge-most intermediate values), and then in turn get more and more parallelism.
So how to most efficiently calculate this viewshed?
What you want is called a sweep algorithm. Basically you cast rays (Bresenham's) to each of the perimeter cells, but keep track of the horizon as you go and mark any cells you pass on the way as being visible or invisible (and update the ray's horizon if visible). This gets you down from the O(n^3) of the naive approach (testing each cell of an n×n DEM individually) to O(n^2).
More detailed description of the algorithm in section 5.1 of this paper (which you might also find interesting for other reasons if you aspire to work with really enormous heightmaps).
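For illustration, a rough C# sketch of one such ray (the grid layout, names, and eye parameters here are my assumptions, not from the paper): walk the Bresenham line from the eye towards a perimeter cell, keeping the steepest eye-relative slope seen so far as the horizon; a cell is visible exactly when its own slope reaches that horizon.

using System;

static class Viewshed
{
    // Cast one Bresenham ray from the eye at (ex, ey, eyeZ) to the perimeter
    // cell (tx, ty), marking cells along the way as visible.
    public static void CastRay(float[,] height, bool[,] visible,
                               int ex, int ey, float eyeZ, int tx, int ty)
    {
        int dx = Math.Abs(tx - ex), dy = Math.Abs(ty - ey);
        int sx = ex < tx ? 1 : -1, sy = ey < ty ? 1 : -1;
        int err = dx - dy, x = ex, y = ey;
        float horizon = float.NegativeInfinity; // steepest slope seen so far

        while (x != tx || y != ty)
        {
            // standard Bresenham step towards the target
            int e2 = 2 * err;
            if (e2 > -dy) { err -= dy; x += sx; }
            if (e2 < dx)  { err += dx; y += sy; }

            float dist = (float)Math.Sqrt((x - ex) * (x - ex) + (y - ey) * (y - ey));
            float slope = (height[y, x] - eyeZ) / dist;

            if (slope >= horizon)
            {
                visible[y, x] = true; // rises above everything nearer on this ray
                horizon = slope;      // and becomes the new horizon
            }
            // cells below the horizon are left untouched; another ray
            // passing through them may still prove them visible
        }
    }
}

Calling this once for each of the O(n) perimeter cells walks O(n) cells per ray, giving the O(n^2) total mentioned above.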
I'm using the ShowTextAtPoint method of CGContext to display text in a view, but it is displayed flipped. Does anyone know how to solve this problem?
Here is the code I use :
ctx.SelectFont("Arial", 16f, CGTextEncoding.MacRoman);
ctx.SetRGBFillColor(0f, 0f, 1f, 1f);
ctx.SetTextDrawingMode(CGTextDrawingMode.Fill);
ctx.ShowTextAtPoint(centerX, centerY, text);
You can manipulate the current transformation matrix on the graphics context to flip it using ScaleCTM and TranslateCTM.
According to the Quartz 2D Programming Guide - Text:
In iOS, you must apply a transform to the current graphics context in order for the text to be oriented as shown in Figure 16-1. This transform inverts the y-axis and translates the origin point to the bottom of the screen. Listing 16-2 shows you how to apply such transformations in the drawRect: method of an iOS view. This method then calls the same MyDrawText method from Listing 16-1 to achieve the same results.
The way this looks in MonoTouch:
public void DrawText(string text, float x, float y)
{
    // the incoming coordinates are origin top-left
    y = Bounds.Height - y;
    // push context
    CGContext c = UIGraphics.GetCurrentContext();
    c.SaveState();
    // This technique requires inversion of the screen coordinates
    // for ShowTextAtPoint
    c.TranslateCTM(0, Bounds.Height);
    c.ScaleCTM(1, -1);
    // for debug purposes, draw crosshairs at the proper location
    DrawMarker(x, y);
    // Set the font drawing parameters
    c.SelectFont("Helvetica-Bold", 12.0f, CGTextEncoding.MacRoman);
    c.SetTextDrawingMode(CGTextDrawingMode.Fill);
    c.SetFillColor(1, 1, 1, 1);
    // Draw the text
    c.ShowTextAtPoint(x, y, text);
    // Restore context
    c.RestoreState();
}
A small utility function to draw crosshairs at the desired point:
public void DrawMarker(float x, float y)
{
    float SZ = 20;
    CGContext c = UIGraphics.GetCurrentContext();
    c.BeginPath();
    c.AddLines( new [] { new PointF(x-SZ, y), new PointF(x+SZ, y) });
    c.AddLines( new [] { new PointF(x, y-SZ), new PointF(x, y+SZ) });
    c.StrokePath();
}