My program receives line segments as input and expands them into cylinder-like objects (like the PipeGS project in the DX SDK sample browser).
I added an array of radius-scaling parameters for the pipes and modify them procedurally, but the pipe radii just don't change.
I'm pretty sure the scaling parameters are updated every frame, because I output them as the pixel color: when I modify them, the pipes change color, yet their radii stay the same.
So I'm wondering whether there is some limitation on using global (cbuffer) variables in the geometry shader; I couldn't find anything about it online (or maybe I just used the wrong keywords).
The shader code looks like this:
cbuffer {
.....
float scaleParam[10];
.....
}
// Pass 1
VS_1 { // pass through }
// Tessellation stages
// Hull shader, domain shader and patch constant function
GS_1 {
    pipeRadius = MaxRadius * scaleParam[PipeID];
    ....
    // calculate pipe positions based on line segments and pipeRadius
    ....
    OutputStream.Append ( ... );
}
// Pixel shader is disabled in the first pass
// Pass 2
VS_2 { // pass through }
// Tessellation stages
// Hull shader, domain shader and patch constant function
// Transform the vertices and normals to world coordinate in DS
// No geometry shader in the second pass
PS_2
{
return float4( scaleParam[0], scaleParam[1], scaleParam[2], 0.0f );
}
Edit:
I've narrowed down the problem.
There are two passes in my program. In the first pass, I expand the line segments in the geometry shader and stream the results out.
In the second pass, the program receives the pipe positions from the first pass, tessellates the pipes, and applies displacement mapping so they can be more detailed.
I can change the surface tessellation factor and the pixel color (both used in the second pass) and see the result on screen immediately.
When I modify scaleParam, the pipes change color but their radii stay the same. This means I am updating scaleParam and passing it to the shaders correctly, but something is wrong in the first pass.
Second edit:
I've revised the shader code above and posted some of the .cpp code below.
In the cpp file:
void DrawScene()
{
    // Update view matrix, TessFactor, scaleParam etc.
    ....
    ....

    // Bind stream-output buffer
    ID3D11Buffer* bufferArray[1] = {mStreamOutBuffer};
    md3dImmediateContext->SOSetTargets(1, bufferArray, 0);

    // Two-pass rendering
    D3DX11_TECHNIQUE_DESC techDesc;
    mTech->GetDesc( &techDesc );
    for(UINT p = 0; p < techDesc.Passes; ++p)
    {
        mTech->GetPassByIndex(p)->Apply(0, md3dImmediateContext);

        // First pass
        if (p == 0)
        {
            md3dImmediateContext->IASetVertexBuffers(0, 1, &mVertexBuffer, &stride, &offset);
            md3dImmediateContext->Draw(mVertexCount, 0);

            // unbind stream-output buffer
            bufferArray[0] = NULL;
            md3dImmediateContext->SOSetTargets( 1, bufferArray, 0 );
        }
        // Second pass
        else
        {
            md3dImmediateContext->IASetVertexBuffers(0, 1, &mStreamOutBuffer, &stride, &offset);
            md3dImmediateContext->DrawAuto();
        }
    }

    HR(mSwapChain->Present(0, 0));
}
Check whether you are using a float4 position: the w value of the vector acts as a scale on the final position in the scene. For example:
float4 pos0 = float4(5, 5, 5, 1);
// is equivalent to:
float4 pos1 = float4(10, 10, 10, 2);
To scale a position correctly, you must change only the .xyz components of the position vector.
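For example, a minimal sketch (my illustration, assuming the position is a float4 with w = 1 when it leaves the geometry shader):
// Scale only the xyz part; leave w at 1 so the position is not re-scaled
// by the homogeneous divide later on.
float4 ScalePosition(float4 pos, float scale)
{
    pos.xyz *= scale;
    pos.w = 1.0f;
    return pos;
}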
I solved this problem by rebuilding the vertex buffer and the stream-output buffer every time after I modify the parameters, but I still don't know what exactly causes the problem.
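For reference, a rough sketch of that workaround (my own reconstruction; mStreamOutBufferSize and the exact bind flags are assumptions about code not shown above):
// Hypothetical helper: release and recreate the stream-output buffer so the
// next first pass streams into a freshly created resource.
void RebuildStreamOutBuffer(ID3D11Device* device)
{
    if (mStreamOutBuffer) { mStreamOutBuffer->Release(); mStreamOutBuffer = NULL; }

    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth = mStreamOutBufferSize; // same size used at startup
    desc.Usage     = D3D11_USAGE_DEFAULT;
    desc.BindFlags = D3D11_BIND_VERTEX_BUFFER | D3D11_BIND_STREAM_OUTPUT;
    HR(device->CreateBuffer(&desc, NULL, &mStreamOutBuffer));
}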
Hi, I've been studying WebGL.
I've been reading the book Real-Time 3D Graphics with WebGL 2, which says: "Vertex array objects allow us to store all of the vertex/index binding information for a set of buffers in a single, easy to manage object."
It provides this example of using a VAO:
function initBuffers() {
  /*
    V0                    V3
    (-0.5, 0.5, 0)        (0.5, 0.5, 0)
    X---------------------X
    |                     |
    |                     |
    |       (0, 0)        |
    |                     |
    |                     |
    X---------------------X
    V1                    V2
    (-0.5, -0.5, 0)       (0.5, -0.5, 0)
  */
  const vertices = [
    -0.5,  0.5, 0,
    -0.5, -0.5, 0,
     0.5, -0.5, 0,
     0.5,  0.5, 0
  ];

  // Indices defined in counter-clockwise order
  indices = [0, 1, 2, 0, 2, 3];

  // Create VAO instance
  squareVAO = gl.createVertexArray();
  // Bind it so we can work on it
  gl.bindVertexArray(squareVAO);

  const squareVertexBuffer = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, squareVertexBuffer);
  gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(vertices), gl.STATIC_DRAW);

  // Provide instructions for VAO to use data later in draw
  gl.enableVertexAttribArray(program.aVertexPosition);
  gl.vertexAttribPointer(program.aVertexPosition, 3, gl.FLOAT, false, 0, 0);

  // Setting up the IBO
  squareIndexBuffer = gl.createBuffer();
  gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, squareIndexBuffer);
  gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, new Uint16Array(indices), gl.STATIC_DRAW);

  // Clean
  gl.bindVertexArray(null);
  gl.bindBuffer(gl.ARRAY_BUFFER, null);
  gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, null);
}

// We call draw to render to our canvas
function draw() {
  // Clear the scene
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
  gl.viewport(0, 0, gl.canvas.width, gl.canvas.height);

  // Bind the VAO
  gl.bindVertexArray(squareVAO);
  gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, squareIndexBuffer);

  // Draw to the scene using triangle primitives
  gl.drawElements(gl.TRIANGLES, indices.length, gl.UNSIGNED_SHORT, 0);

  // Clean
  gl.bindVertexArray(null);
  gl.bindBuffer(gl.ARRAY_BUFFER, null);
  gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, null);
}

// Entry point to our application
function init() {
  // Retrieve the canvas
  const canvas = utils.getCanvas('webgl-canvas');

  // Set the canvas to the size of the screen
  canvas.width = window.innerWidth;
  canvas.height = window.innerHeight;

  // Retrieve a WebGL context
  gl = utils.getGLContext(canvas);
  // Set the clear color to be black
  gl.clearColor(0, 0, 0, 1);

  // Call the functions in an appropriate order
  initProgram();
  initBuffers();
  draw();
}
The question here is: do we need the gl.bindBuffer() call after we bind the VAO in draw()?
I looked at this link, What are Vertex Arrays in OpenGL & WebGL2?, and it says: "At draw time it then only takes one call to gl.bindVertexArray to setup all the attributes and the ELEMENT_ARRAY_BUFFER." So I suppose there is no need for the gl.bindBuffer() call after we bind the VAO in draw()?
Is the code from the textbook misleading?
No, you do not need to rebind the buffer.
The ELEMENT_ARRAY_BUFFER binding is part of the current vertex array's state, as the answer you linked to points out.
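For example, a minimal draw() (my sketch, reusing squareVAO and indices from the question) that relies only on the VAO:
function draw() {
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
  gl.viewport(0, 0, gl.canvas.width, gl.canvas.height);

  // Binding the VAO also restores its ELEMENT_ARRAY_BUFFER binding,
  // so no extra gl.bindBuffer call is needed before drawing.
  gl.bindVertexArray(squareVAO);
  gl.drawElements(gl.TRIANGLES, indices.length, gl.UNSIGNED_SHORT, 0);
}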
These lines in your example are also largely unnecessary.
In initBuffers:
// Clean
gl.bindVertexArray(null);
gl.bindBuffer(gl.ARRAY_BUFFER, null);
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, null);
None of these lines is truly needed. Only the first one has any real point, even though it is not required.
This line:
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, null);
does nothing, really, because, as stated above, the ELEMENT_ARRAY_BUFFER state is part of the current vertex array, so just changing the current vertex array with gl.bindVertexArray already changes that binding.
This line:
gl.bindBuffer(gl.ARRAY_BUFFER, null);
has no real point, because, AFAIK, almost no program ever assumes the current ARRAY_BUFFER binding is set to anything; code always binds a buffer before operating on it. It's not bad to have it, and I'm sure you could find some convoluted way to make it matter, but in real life I haven't seen one.
This line does have a point.
gl.bindVertexArray(null);
It is common to set up vertex buffers separately from vertex attributes. If you are making one vertex array per thing to draw and your pattern looks like this
// at init time
for each thing I plan to draw
    (1) create buffers and fill with positions/normals/texcoords/indices
    (2) create/bind vertex array
    (3) setup attributes and ELEMENT_ARRAY_BUFFER
then, if you don't bind null after step 3, step 1 of the next iteration will end up changing the ELEMENT_ARRAY_BUFFER binding of the previously bound vertex array.
In other words, maybe this line
gl.bindVertexArray(null);
has a point. Still, it's arguable: if you swapped steps 1 and 2 and changed your initialization to
// at init time
for each thing I plan to draw
    (1) create/bind vertex array
    (2) create buffers and fill with positions/normals/texcoords/indices
    (3) setup attributes
then the problem goes away (a concrete sketch follows below).
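A concrete sketch of that second ordering (my illustration; thingsToDraw, thing.positions and thing.indices are placeholders for your own data):
// at init time, per object
for (const thing of thingsToDraw) {
  // (1) create/bind the vertex array first
  thing.vao = gl.createVertexArray();
  gl.bindVertexArray(thing.vao);

  // (2) create buffers and fill them while this VAO is bound
  const positionBuffer = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
  gl.bufferData(gl.ARRAY_BUFFER, thing.positions, gl.STATIC_DRAW);

  const indexBuffer = gl.createBuffer();
  gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
  gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, thing.indices, gl.STATIC_DRAW);

  // (3) setup attributes; the ELEMENT_ARRAY_BUFFER binding is already
  // recorded in this thing's VAO, so nothing here touches another VAO
  gl.enableVertexAttribArray(program.aVertexPosition);
  gl.vertexAttribPointer(program.aVertexPosition, 3, gl.FLOAT, false, 0, 0);
}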
Those same three lines exist in draw:
// Clean
gl.bindVertexArray(null);
gl.bindBuffer(gl.ARRAY_BUFFER, null);
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, null);
where, again, they have no point.
After my main camera renders, I'd like to use (or copy) its depth buffer to a (disabled) camera's depth buffer.
My goal is to draw particles onto a smaller render target (using a separate camera) while using the depth buffer after opaque objects are drawn.
I can't do this in a single camera, since the goal is to use a smaller render target for the particles for performance reasons.
Replacement shaders in Unity aren't an option either: I want my particles to use their existing shaders - I just want the depth buffer of the particle camera to be overwritten with a subsampled version of the main camera's depth buffer before the particles are drawn.
I didn't get any reply to my earlier question; hence, the repost.
Here's the script attached to my main camera. It renders all the non-particle layers and I use OnRenderImage to invoke the particle camera.
public class MagicRenderer : MonoBehaviour {
    public Shader particleShader; // shader that uses the main camera's depth buffer to depth-test particle Z
    public Material blendMat;     // material that uses a simple blend shader
    public int downSampleFactor = 1;

    private RenderTexture particleRT;
    private static GameObject pCam;

    void Awake () {
        // make the main camera's depth buffer available to the shaders via _CameraDepthTexture
        camera.depthTextureMode = DepthTextureMode.Depth;
    }

    // Update is called once per frame
    void Update () {
    }

    void OnRenderImage(RenderTexture src, RenderTexture dest) {
        // create tmp RT
        particleRT = RenderTexture.GetTemporary (Screen.width / downSampleFactor, Screen.height / downSampleFactor, 0);
        particleRT.antiAliasing = 1;

        // create particle cam
        Camera pCam = GetPCam ();
        pCam.CopyFrom (camera);
        pCam.clearFlags = CameraClearFlags.SolidColor;
        pCam.backgroundColor = new Color (0.0f, 0.0f, 0.0f, 0.0f);
        pCam.cullingMask = 1 << LayerMask.NameToLayer ("Particles");
        pCam.useOcclusionCulling = false;
        pCam.targetTexture = particleRT;
        pCam.depth = 0;

        // Draw to particleRT's colorBuffer using mainCam's depth buffer
        // ?? - how do I transfer this camera's depth buffer to pCam?
        pCam.Render ();
        // pCam.RenderWithShader (particleShader, "Transparent"); // I don't want to replace the shaders my particles use, so shader replacement isn't an option.

        // blend mainCam's colorBuffer with particleRT's colorBuffer
        // Graphics.Blit(pCam.targetTexture, src, blendMat);

        // copy resulting buffer to destination
        Graphics.Blit (pCam.targetTexture, dest);

        // clean up
        RenderTexture.ReleaseTemporary(particleRT);
    }

    static public Camera GetPCam() {
        if (!pCam) {
            GameObject oldpcam = GameObject.Find("pCam");
            Debug.Log (oldpcam);
            if (oldpcam) Destroy(oldpcam);

            pCam = new GameObject("pCam");
            pCam.AddComponent<Camera>();
            pCam.camera.enabled = false;
            pCam.hideFlags = HideFlags.DontSave;
        }
        return pCam.camera;
    }
}
I have a few additional questions:
1) Why does camera.depthTextureMode = DepthTextureMode.Depth; end up drawing all the objects in the scene just to write to the Z-buffer? Using Intel GPA, I see two passes before OnRenderImage gets called:
(i) a Z-prepass, which writes only to the depth buffer;
(ii) a color pass, which writes to both the color and depth buffers.
2) I re-rendered the opaque objects to pCam's RT using a replacement shader that writes (0,0,0,0) to the colorBuffer with ZWrite On (to overcome the depth buffer transfer problem). After that, I reset the culling mask and clear flags as follows:
pCam.cullingMask = 1 << LayerMask.NameToLayer ("Particles");
pCam.clearFlags = CameraClearFlags.Nothing;
and rendered them using pCam.Render().
I thought this would render the particles using their existing shaders, depth-tested against the depth buffer left over from the opaque pass.
Unfortunately, what I notice is that the depth-stencil buffer is cleared before the particles are drawn (despite my not clearing anything).
Why does this happen?
It's been five years, but I developed an almost complete solution for rendering particles in a smaller, separate render target. I'm writing this for future visitors; a fair amount of background knowledge is still required.
Copying the depth
First, you have to get the scene depth at the resolution of your smaller render texture.
This can be done by creating a new render texture with the color format "depth".
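For example (a sketch, reusing the mSmallerSceneDepthTexture and downSampleFactor names that appear below; adjust the size to your setup):
// Low-resolution depth target: RenderTextureFormat.Depth stores scene depth
// so it can later be bound as the second camera's depth buffer.
mSmallerSceneDepthTexture = new RenderTexture(
    Screen.width / downSampleFactor,
    Screen.height / downSampleFactor,
    24,                               // depth bits of the render target itself
    RenderTextureFormat.Depth);
mSmallerSceneDepthTexture.Create();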
To write the scene depth into the low-resolution depth texture, create a shader that just outputs the depth:
struct fragOut {
    float depth : DEPTH;
};

sampler2D _LastCameraDepthTexture;

fragOut frag (v2f i)
{
    fragOut tOut;
    tOut.depth = tex2D(_LastCameraDepthTexture, i.uv).x;
    return tOut;
}
_LastCameraDepthTexture is automatically filled by Unity, but there is a downside.
It only comes for free if the main camera renders with deferred rendering.
For forward shading, Unity seems to render the scene again just for the depth texture.
Check the frame debugger.
Then, add a post processing effect to the main camera that executes the shader:
protected virtual void OnRenderImage(RenderTexture pFrom, RenderTexture pTo) {
    Graphics.Blit(pFrom, mSmallerSceneDepthTexture, mRenderToDepthMaterial);
    Graphics.Blit(pFrom, pTo);
}
You can probably do this without the second blit, but it was easier for me for testing.
Using the copied depth for rendering
To use the new depth texture for your second camera, call
mSecondCamera.SetTargetBuffers(mParticleRenderTexture.colorBuffer, mSmallerSceneDepthTexture.depthBuffer);
Keep targetTexture empty.
You then must ensure the second camera does not clear the depth, only the color.
For this, disable clear on the second camera completely and clear manually like this
Graphics.SetRenderTarget(mParticleRenderTexture);
GL.Clear(false, true, Color.clear);
I recommend also rendering the second camera by hand: disable it and call
mSecondCamera.Render();
after clearing.
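Putting the manual clear and the render together, a rough sketch (the names come from the snippets above; where you call this depends on your setup):
void RenderParticleLayer()
{
    // Clear only the color of the particle target; keep the copied scene depth.
    Graphics.SetRenderTarget(mParticleRenderTexture);
    GL.Clear(false, true, Color.clear);

    // The second camera is disabled, so drive it by hand.
    mSecondCamera.Render();
}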
Merging
Now you have to merge the main view and the separate layer.
Depending on your rendering, you will probably end up with a render texture with so-called premultiplied alpha.
To mix this with the rest, use a post-processing step on the main camera with:
fixed4 tBasis = tex2D(_MainTex, i.uv);
fixed4 tToInsert = tex2D(TransparentFX, i.uv);

// beware premultiplied alpha in the insert
tBasis.rgb = tBasis.rgb * (1.0f - tToInsert.a) + tToInsert.rgb;
return tBasis;
Additive materials work out of the box, but alpha-blended ones do not.
To get working alpha-blended materials, you have to create a shader with custom blending. The blending is:
Blend SrcAlpha OneMinusSrcAlpha, One OneMinusSrcAlpha
This changes how the alpha channel is modified for each blend operation.
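As a sketch of where that line lives (only the Blend directive matters here; the shader name and the rest of the pass are placeholders):
Shader "Custom/FXLayerAlphaBlended" {
    SubShader {
        Tags { "Queue" = "Transparent" }
        Pass {
            // Separate RGB / alpha factors: color blends as usual, while the
            // destination alpha accumulates coverage for the later composite.
            Blend SrcAlpha OneMinusSrcAlpha, One OneMinusSrcAlpha
            ZWrite Off

            // ... the particle shader's usual CGPROGRAM block goes here ...
        }
    }
}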
Results
(Result screenshots omitted: additive blended in front of alpha blended, and alpha blended in front of additive blended, each shown as the fx layer's rgb and alpha.)
I have not yet tested whether the performance actually increases.
If anyone has a simpler solution, please let me know.
I managed to reuse the camera's Z-buffer "manually" in the shader used for rendering. See http://forum.unity3d.com/threads/reuse-depth-buffer-of-main-camera.280460/ for more.
Just alter the particle shader you already use for particle rendering.
I am trying to implement box select in a 3D world: click, hold the mouse button, drag, then release to get a box, and select whatever falls inside it. To start, I'm trying to figure out how to get the coordinates of the clicks in 3D.
I have ray picking, and it is not giving me the right coordinates (it returns an origin and a direction). It keeps returning the same origin no matter what the screen X/Y is (although the direction is different).
I've also tried:
D3DXVECTOR3 ori = D3DXVECTOR3(sx, sy, 0.0f);
D3DXVECTOR3 out;
D3DXVec3Unproject(&out, &ori, &viewPort, &projectionMat, &viewMat, &worldMat);
This gives much the same result: the output coordinates are very close to each other no matter which screen coordinates I pass in (and they are wrong). It's almost as if it returns the eye position instead of the actual world coordinate.
How do I turn 2D screen coordinates into 3D using DirectX 9.0c?
This is called picking in Direct3D. To select a model in 3D space, you mainly need three steps:
Generate the picking ray
Transform the picking ray and the model you want to pick into the same space
Do an intersection test between the picking ray and the model
Generate the picking ray
When we click the mouse on the screen (say the point is s), the model is selected when its projection onto the projection window covers the area around the point s.
So, to generate the picking ray from the given screen coordinates (x, y), we first need to transform (x, y) onto the projection window; this can be done by inverting the viewport transformation. Another thing to account for is that the point on the projection window was scaled by the projection matrix, so we should divide it by those scale factors.
In DirectX, the camera in view space is always placed at the origin, so the picking ray starts at the origin, and the projection window is the near clip plane (z = 1). This is what the code below does.
Ray CalcPickingRay(LPDIRECT3DDEVICE9 Device, int screen_x, int screen_y)
{
    float px = 0.0f;
    float py = 0.0f;

    // Get viewport
    D3DVIEWPORT9 vp;
    Device->GetViewport(&vp);

    // Get projection matrix
    D3DXMATRIX proj;
    Device->GetTransform(D3DTS_PROJECTION, &proj);

    px = ((( 2.0f * screen_x) / vp.Width)  - 1.0f) / proj(0, 0);
    py = (((-2.0f * screen_y) / vp.Height) + 1.0f) / proj(1, 1);

    Ray ray;
    ray._origin    = D3DXVECTOR3(0.0f, 0.0f, 0.0f);
    ray._direction = D3DXVECTOR3(px, py, 1.0f);
    return ray;
}
Transform the picking ray and model into the same space.
We usually do this by transforming the picking ray into world space: simply take the inverse of your view matrix and apply it to the picking ray.
// transform the ray from view space to world space
void TransformRay(Ray* ray, D3DXMATRIX* invertViewMatrix)
{
    // transform the ray's origin, w = 1.
    D3DXVec3TransformCoord(&ray->_origin, &ray->_origin, invertViewMatrix);

    // transform the ray's direction, w = 0.
    D3DXVec3TransformNormal(&ray->_direction, &ray->_direction, invertViewMatrix);

    // normalize the direction
    D3DXVec3Normalize(&ray->_direction, &ray->_direction);
}
Do intersection test
If everything above is in place, you can do the intersection test now. This is a ray-box intersection, so you can use the function D3DXBoxBoundProbe. You can change the way the box is drawn to verify that picking really works, for example by setting the fill mode to solid or wireframe when D3DXBoxBoundProbe returns TRUE.
You can perform the picking in response to WM_LBUTTONDOWN:
case WM_LBUTTONDOWN:
{
    // Get screen point
    int iMouseX = (short)LOWORD(lParam);
    int iMouseY = (short)HIWORD(lParam);

    // Calculate the picking ray
    Ray ray = CalcPickingRay(g_pd3dDevice, iMouseX, iMouseY);

    // Transform the ray from view space to world space:
    // get the view matrix
    D3DXMATRIX view;
    g_pd3dDevice->GetTransform(D3DTS_VIEW, &view);

    // invert it
    D3DXMATRIX viewInverse;
    D3DXMatrixInverse(&viewInverse, 0, &view);

    // apply it to the ray
    TransformRay(&ray, &viewInverse);

    // collision detection
    if (D3DXBoxBoundProbe(&box.minPoint, &box.maxPoint, &ray._origin, &ray._direction))
    {
        g_pd3dDevice->SetRenderState(D3DRS_FILLMODE, D3DFILL_SOLID);
    }
    break;
}
It turns out I was approaching the problem the wrong way around. Turning 2D into 3D didn't make sense for my case; converting the vertices from 3D to 2D and then testing whether they fall inside the 2D box was the right answer!
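For what it's worth, a sketch of that 3D-to-2D test (my own illustration; viewPort, projectionMat, viewMat and worldMat are the same objects passed to D3DXVec3Unproject above, and the box is in screen pixels):
// Returns true if a world-space vertex projects inside the 2D selection box.
bool VertexInScreenBox(const D3DXVECTOR3& worldPos,
                       float boxLeft, float boxTop, float boxRight, float boxBottom)
{
    D3DXVECTOR3 screenPos;
    D3DXVec3Project(&screenPos, &worldPos, &viewPort,
                    &projectionMat, &viewMat, &worldMat);

    return screenPos.x >= boxLeft && screenPos.x <= boxRight &&
           screenPos.y >= boxTop  && screenPos.y <= boxBottom;
}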
This can probably be filed under premature optimization, but since the vertex shader executes on each vertex for each frame, it seems like something worth doing (I have a lot of vars I need to multiply before going to the pixel shader).
Essentially, the vertex shader performs these operations to convert a vertex position into projected space:
// Transform the vertex position into projected space.
pos = mul(pos, model);
pos = mul(pos, view);
pos = mul(pos, projection);
output.pos = pos;
Since I'm doing this operation to multiple vectors in the shader, it made sense to combine those matrices into a cumulative matrix on the CPU and then flush that to the GPU for calculation, like so:
// VertexShader.hlsl
cbuffer ModelViewProjectionConstantBuffer : register (b0)
{
    matrix model;
    matrix view;
    matrix projection;
    matrix cummulative;
    float3 eyePosition;
};
...
// Transform the vertex position into projected space.
pos = mul(pos, cummulative);
output.pos = pos;
And on the CPU:
// Renderer.cpp
// now is also the time to update the cummulative matrix
m_constantMatrixBufferData->cummulative =
    m_constantMatrixBufferData->model *
    m_constantMatrixBufferData->view *
    m_constantMatrixBufferData->projection;
// NOTE: each of the above vars is an XMMATRIX
My intuition was that there was some mismatch of row-major/column-major, but XMMATRIX is a row-major struct (and all of its operators treat it as such) and mul(...) interprets its matrix parameter as row-major. So that doesn't seem to be the problem but perhaps it still is in a way that I don't understand.
I've also checked the contents of the cumulative matrix and they appear correct, further adding to the confusion.
Thanks for reading, I'll really appreciate any hints you can give me.
EDIT (additional information requested in comments):
This is the struct that I am using as my matrix constant buffer:
// a constant buffer that contains the 3 matrices needed to
// transform points so that they're rendered correctly
struct ModelViewProjectionConstantBuffer
{
    DirectX::XMMATRIX model;
    DirectX::XMMATRIX view;
    DirectX::XMMATRIX projection;
    DirectX::XMMATRIX cummulative;
    DirectX::XMFLOAT3 eyePosition;

    // and padding to make the size divisible by 16
    float padding;
};
I create the matrix stack in CreateDeviceResources (along with my other constant buffers), like so:
void ModelRenderer::CreateDeviceResources()
{
    Direct3DBase::CreateDeviceResources();

    // Let's take this moment to create some constant buffers
    ... // creation of other constant buffers

    // and lastly, the all mighty matrix buffer
    CD3D11_BUFFER_DESC constantMatrixBufferDesc(sizeof(ModelViewProjectionConstantBuffer), D3D11_BIND_CONSTANT_BUFFER);
    DX::ThrowIfFailed(
        m_d3dDevice->CreateBuffer(
            &constantMatrixBufferDesc,
            nullptr,
            &m_constantMatrixBuffer
        )
    );

    ... // and the rest of the initialization (reading in the shaders, loading assets, etc)
}
I write into the matrix buffer inside a matrix stack class I created. The client of the class calls Update() once they are done modifying the matrices:
void MatrixStack::Update()
{
    // then update the buffers
    m_constantMatrixBufferData->model = model.front();
    m_constantMatrixBufferData->view = view.front();
    m_constantMatrixBufferData->projection = projection.front();
    // NOTE: the eye position has no stack, as it's kept updated by the trackball

    // now is also the time to update the cummulative matrix
    m_constantMatrixBufferData->cummulative =
        m_constantMatrixBufferData->model *
        m_constantMatrixBufferData->view *
        m_constantMatrixBufferData->projection;

    // and flush
    m_d3dContext->UpdateSubresource(
        m_constantMatrixBuffer.Get(),
        0,
        NULL,
        m_constantMatrixBufferData,
        0,
        0
    );
}
Given your code snippets, it should work.
Possible causes of your problem:
Have you tried the inverted multiplication, projection * view * model (see the sketch below)?
Are you sure you set the cummulative matrix correctly in your constant buffer (register index 12, offset 192)?
Same for eyePosition (register index 16, offset 256)?
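To illustrate the first suggestion, the inverted order applied to the snippet from the question would look like this (a possible cause to test, not a confirmed fix):
// Inverted multiplication order: with mul(pos, cummulative) in the shader and
// the matrices uploaded as-is, the combined matrix may need to be built
// projection-first.
m_constantMatrixBufferData->cummulative =
    m_constantMatrixBufferData->projection *
    m_constantMatrixBufferData->view *
    m_constantMatrixBufferData->model;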
I am making an object move in its Update() and turn left, right, up, and down according to user input. All I want is to make a spotlight follow the object.
Object's Rotation: 0,180,0
SpotLight's Rotation: 90,0,0
Since the rotations are different (and they need to be like that), I cannot simply make the light follow the object.
Code:
function Update () {
    SetControl(); // Input Stuff...
    transform.Translate (0, 0, objectSpeed*Time.deltaTime);

    lightView.transform.eulerAngles = this.transform.eulerAngles;
    lightView.transform.Rotate = this.transform.eulerAngles;
    lightView.transform.Translate(snakeSpeed*Time.deltaTime, 0, 0); // THIS IS INCORRECT
}
lightView simply points to the SpotLight.
What you're looking for is the Unity method Transform.LookAt.
Place the following script on the spotlight. This code will make the object it is attached to look at another object.
// Drag another object onto it to make the camera look at it.
var target : Transform;

// Rotate the camera every frame so it keeps looking at the target
function Update() {
    transform.LookAt(target);
}
All I want is to make a spotlight follow the object.
This is a two-step process. First, find the coordinate position (in world coordinates) of your target. Second, apply that position plus an offset to your spotlight. Since your light is rotated 90° along the x-axis, I assume your light is above and looking down.
var offset = new Vector3(0, 5, 0);

function Update()
{
    // Move this object
    transform.Translate (0, 0, objectSpeed*Time.deltaTime);

    // Move the light to transform's position + offset.
    // Note that the light's rotation has already been set and does
    // not need to be re-set each frame.
    lightView.transform.position = transform.position + offset;
}
If you want a smoother "following" action, do a linear interpolation over time. Replace
lightView.transform.position = transform.position + offset;
with
lightView.transform.position = Vector3.Lerp(lightView.transform.position, transform.position + offset, Time.deltaTime * smoothingFactor);
where smoothingFactor is any float.
As an aside, it is very costly to call transform.* in any kind of recurring game loop, because GameObject.transform is actually a get property that does a component search. Most Unity documentation recommends you cache the transform variable first.
Better code:
var myTrans = transform; // Cache the transform
var lightTrans = lightView.transform;
var offset = new Vector3(0, 5, 0);

function Update()
{
    // Move this object
    myTrans.Translate (0, 0, objectSpeed*Time.deltaTime);

    // Move the light to transform's position + offset.
    // Note that the light's rotation has already been set and does
    // not need to be re-set each frame.
    lightTrans.position = myTrans.position + offset;
}