I've recently started learning HLSL after deciding that I wanted better lighting than what BasicEffect offered. After going through many tutorials, I found this and decided to learn from it:
http://brooknovak.wordpress.com/2008/11/13/hlsl-per-pixel-point-light-using-phong-blinn-lighting-model/
It seems that the shader above doesn't work very well in my game, though, because my game uses a tile-based approach, meaning multiple models in a grid-like formation.
What happens is that each of my tiles gets shaded separately from the others. Please see this image for a visual reference:
http://i.imgur.com/1Sfi2.png
I understand that this is because each tile has its own model, and the shader doesn't take other models into account as it executes on the meshes of a single model.
Now, for the question: how does one go about shading all the tiles together? I understand that I may have to write a shader from scratch to accomplish this, but if anyone could give me some tips on how to achieve the effect I want, I'd really appreciate it.
It's late so there's a possibility that I've forgotten something. If you need more information, please tell me and I'll add it.
Thanks, Merigrim
EDIT:
Here is my code for drawing a model:
public void DrawModel(Model model, Matrix modelTransform, Matrix[] absoluteBoneTransforms, Vector3 color, float alpha = 1.0f, Texture2D texture = null)
{
    foreach (ModelMesh mesh in model.Meshes)
    {
        // Assign the custom effect to every part; ModelMesh.Draw() applies the
        // effect's passes itself, so no explicit pass loop is needed here.
        foreach (ModelMeshPart part in mesh.MeshParts)
        {
            part.Effect = effect;
        }

        Matrix world = absoluteBoneTransforms[mesh.ParentBone.Index] * modelTransform;
        effect.Parameters["World"].SetValue(world);
        effect.Parameters["View"].SetValue(camera.view);
        effect.Parameters["Projection"].SetValue(camera.projection);
        effect.Parameters["CameraPos"].SetValue(camera.cameraPosition);

        Vector3 lookAt = camera.cameraPosition + camera.cameraDirection;
        effect.Parameters["LightPosition"].SetValue(new Vector3(lookAt.X, 1.0f, lookAt.Z - 5.0f));
        effect.Parameters["LightDiffuseColor"].SetValue(new Vector3(0.45f, 0.45f, 0.45f));
        effect.Parameters["LightSpecularColor"].SetValue(new Vector3(0.45f, 0.45f, 0.45f));
        effect.Parameters["LightDistanceSquared"].SetValue(40.0f);
        effect.Parameters["DiffuseColor"].SetValue(color);
        effect.Parameters["AmbientLightColor"].SetValue(Color.Black.ToVector3());
        effect.Parameters["EmissiveColor"].SetValue(Color.White.ToVector3());
        effect.Parameters["SpecularColor"].SetValue(Color.White.ToVector3());
        effect.Parameters["SpecularPower"].SetValue(10.0f);

        if (texture != null)
        {
            effect.Parameters["DiffuseTexture"].SetValue(texture);
        }

        mesh.Draw();
    }
}
It seems that the normals were the villain this time around. After correcting the normals in Blender, everything seems to work now.
I want to thank meds and Andrew Russell. Without your help I wouldn't have figured it out!
So now I know: when you have problems with your lighting, always check the normals first.
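For anyone who ends up here with similar per-tile lighting seams: besides bad normals in the source mesh, another common cause is transforming normals with the plain world matrix, which skews them under non-uniform scale. Here is a minimal sketch of the usual fix, passing the inverse transpose of the world matrix to the shader; note that the WorldInverseTranspose parameter name is an assumption, and only works if your .fx file actually declares it:

    // Hypothetical parameter name -- only valid if the .fx file declares WorldInverseTranspose.
    Matrix world = absoluteBoneTransforms[mesh.ParentBone.Index] * modelTransform;
    Matrix worldInverseTranspose = Matrix.Transpose(Matrix.Invert(world));
    effect.Parameters["World"].SetValue(world);
    effect.Parameters["WorldInverseTranspose"].SetValue(worldInverseTranspose);
    // In the vertex shader, multiply normals by this matrix,
    // then re-normalize in the pixel shader before lighting.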
Say I have a game scene, CarLevelScene which is a level where I have to drive a car a certain distance. I have a "distance remaining" gauge in one corner of the scene and a "fuel remaining" gauge in another corner. In favor of single responsibility, I want to extract this from the CarLevelScene js file. In addition, on update, both of these gauges depend on some shared data (since fuel depends on distance traveled).
Right now, I essentially do something like this:
let fuelGaugeState;
let distanceGaugeState;

export default class CarLevelScene {
  init() {
    fuelGaugeState = new FuelGaugeState(this); // have to pass in this to create game objects in the scene. Or do I?
    distanceGaugeState = new DistanceGaugeState(this);
  }

  create() {
    fuelGaugeState.create();
    distanceGaugeState.create();
  }

  update() {
    ...
    let distance = someCalc(...); // possibly from some distance model object
    let fuel = someOtherCalc(distance, ...);
    fuelGaugeState.update(fuel);
    distanceGaugeState.update(distance);
  }
}
My question is, is this the conventional way to create and use components in Phaser 3? Essentially, it would make the scene an organizer of components that simply feed the correct data to each update method. It feels a bit smelly. There's temporal coupling and a lot of dependency management.
Are there official docs that deal with the appropriate organization of components in code?
Does anyone know if ReferenceIntersector works with TopographySurfaces? I cannot make it work. I need to find a point on the surface based on an intersection with a line.
Did you solve it? If not, I gave it a try, and for me this code works fine:
public XYZ ProjectPointOnTopographySurface(XYZ point, int direction)
{
    // Get the default 3D view; the ReferenceIntersector needs a 3D view to work in.
    View3D view3D = new FilteredElementCollector(Document)
        .OfClass(typeof(View3D))
        .Cast<View3D>()
        .Where(v => v.Name == "{3D}")
        .FirstOrDefault();

    XYZ vectorDirection = new XYZ(0, 0, direction);

    // Only consider topography surfaces when looking for ray hits.
    ElementClassFilter intersectionFilter = new ElementClassFilter(typeof(TopographySurface));
    ReferenceIntersector referenceIntersector = new ReferenceIntersector(intersectionFilter, FindReferenceTarget.All, view3D);

    ReferenceWithContext referenceWithContext = referenceIntersector.FindNearest(point, vectorDirection);
    return referenceWithContext?.GetReference().GlobalPoint; // null if nothing was hit
}
Regardless of whether the ReferenceIntersector does or does not work with topography surfaces, you can pretty easily solve the problem you describe yourself using other means. Simply ask the surface for its tessellated representation. That will return a bunch of triangles. Then, implement your own algorithm to intersect a triangle with the line. That should give you all you need, really.
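For the do-it-yourself route, here is a minimal sketch of that idea, assuming the topography geometry tessellates into Revit Mesh objects; the method name and the epsilon tolerance are illustrative, and the triangle test is standard Moeller-Trumbore:

    public XYZ IntersectLineWithTopography(TopographySurface surface, XYZ origin, XYZ direction)
    {
        Options options = new Options();
        foreach (GeometryObject obj in surface.get_Geometry(options))
        {
            Mesh mesh = obj as Mesh;
            if (mesh == null) continue;
            for (int i = 0; i < mesh.NumTriangles; i++)
            {
                MeshTriangle tri = mesh.get_Triangle(i);
                XYZ p0 = tri.get_Vertex(0);
                XYZ p1 = tri.get_Vertex(1);
                XYZ p2 = tri.get_Vertex(2);

                // Moeller-Trumbore ray/triangle intersection
                XYZ e1 = p1 - p0;
                XYZ e2 = p2 - p0;
                XYZ h = direction.CrossProduct(e2);
                double a = e1.DotProduct(h);
                if (Math.Abs(a) < 1e-9) continue;     // ray parallel to triangle
                double f = 1.0 / a;
                XYZ s = origin - p0;
                double u = f * s.DotProduct(h);
                if (u < 0.0 || u > 1.0) continue;      // outside the triangle
                XYZ q = s.CrossProduct(e1);
                double v = f * direction.DotProduct(q);
                if (v < 0.0 || u + v > 1.0) continue;  // outside the triangle
                double t = f * e2.DotProduct(q);
                if (t > 1e-9)
                    return origin + t * direction;     // hit point on the surface
            }
        }
        return null; // no intersection found
    }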
I've been struggling with this issue off and on for the better part of a year.
As the title says, I wish to dimension from one side of a wall, to both sides of openings (door openings), then terminate at the other end of the wall (vertically and horizontally). I also wish to dimension to all families hosted in the wall, which I have been able to accomplish using Scott Wilson's voodoo magic helper class, found here: http://thebuildingcoder.typepad.com/blog/2016/04/stable-reference-string-magic-voodoo.html
foreach (ElementId ele in selSet)
{
    FamilyInstance fi = doc.GetElement(ele) as FamilyInstance;
    Reference reference = ScottWilsonVoodooMagic.GetSpecialFamilyReference(fi, ScottWilsonVoodooMagic.SpecialReferenceType.CenterLR, doc);
    refs.Append(reference);
    pts[i] = (fi.Location as LocationPoint).Point;
    i++;
}

XYZ offset = new XYZ(0, 0, 4);
Line line = Line.CreateBound(pts[0] + offset, pts[selSet.Count - 1] + offset);

using (Transaction t = new Transaction(doc))
{
    t.Start("dimension embeds");
    Dimension dim = doc.Create.NewDimension(doc.ActiveView, line, refs);
    t.Commit();
}
The problem lies in determining the appropriate stable references to the wall faces. I am able to find all faces on a wall, but this gives me 100+ faces to sort through.
If anyone can help me it would be greatly appreciated!
Side note: The closest I've gotten is casting a ray through my panel, then using a ReferenceIntersector to determine references. But I'm really struggling with the implementation: http://thebuildingcoder.typepad.com/blog/2015/12/retrieving-wall-openings-and-sorting-points.html
These two posts should provide more than enough to solve all your issues:
Dimension walls by iterating their faces
Dimension walls by shooting a ray
Basically, you need to obtain references to the faces or edges that you wish to attach the dimensioning to. These references can be obtained in several ways. Two common and easy approaches are:
Retrieve the element geometry using the option ComputeReferences set to true and extract the specific face required (see the sketch below).
Shoot a ray through the model to determine the required element and its face using the ReferenceIntersector class.
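For the first approach, here is a minimal sketch of extracting a side-face reference from a wall, assuming a straight wall with planar side faces; the helper name and the normal-matching criterion are illustrative, not something from the posts above:

    Reference GetWallSideFaceReference(Wall wall, XYZ sideNormal)
    {
        // ComputeReferences must be true, or Face.Reference will come back null.
        Options opt = new Options { ComputeReferences = true };
        foreach (GeometryObject obj in wall.get_Geometry(opt))
        {
            Solid solid = obj as Solid;
            if (solid == null) continue;
            foreach (Face face in solid.Faces)
            {
                PlanarFace pf = face as PlanarFace;
                if (pf == null) continue;
                // Keep the face whose outward normal points toward the requested side.
                if (pf.FaceNormal.IsAlmostEqualTo(sideNormal))
                    return pf.Reference;
            }
        }
        return null; // no matching planar face found
    }

The returned references can then be appended to the same ReferenceArray that is passed to doc.Create.NewDimension above.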
I am trying to find my way using SVGKit (https://github.com/SVGKit/SVGKit) for an iOS project dealing with geographical maps.
At this point, I can access a particular area on a map using a CALayer object. That lets me access the rectangle surrounding the area.
Here is the code I use for this:
CALayer *layer = [svgView.document layerWithIdentifier:@"myLayerID"];
[layer setBackgroundColor:[UIColor orangeColor].CGColor];
if ([layer isKindOfClass:[CAShapeLayer class]])
{
    CAShapeLayer *shapeLayer = (CAShapeLayer *)layer;
    NSLog(@"That is good so far!");
    layer.mask = shapeLayer;
}
But I need to access the precise area of the map, not only its surrounding rectangle, in order to highlight it.
I have read that I should use a CGPathRef and a mask.
How exactly can I do this?
Thanks for any tips.
When you find the CALayer, cast it to a CAShapeLayer (if you can; if you have the right layer, this should work fine).
if ([layer isKindOfClass:[CAShapeLayer class]])
{
    CAShapeLayer *shapeLayer = (CAShapeLayer *)layer;
    // Now you have access to lots more Apple methods
}
Then you can change the line width, fill color, etc. - all sorts of funky stuff.
Also look into CALayer.shadow* - various features from Apple there that will automatically highlight the visible parts of a layer.
I have a view that has multiple views inside it, and an image presentation (a.k.a. 'cover flow') in there too... and I need to take a screenshot programmatically!
The docs say that "renderInContext:" will not render 3D animations:
"Important The Mac OS X v10.5 implementation of this method does not support the entire Core Animation composition model. QCCompositionLayer, CAOpenGLLayer, and QTMovieLayer layers are not rendered. Additionally, layers that use 3D transforms are not rendered, nor are layers that specify backgroundFilters, filters, compositingFilter, or a mask values. Future versions of Mac OS X may add support for rendering these layers and properties."
source: https://developer.apple.com/library/mac/#documentation/graphicsimaging/reference/CALayer_class/Introduction/Introduction.html
I have searched a lot, and my 'best' solution (which is not good at all) is to create my own CGContext and record all CG animations into it. But I really do not want to do that, because I would need to rewrite most of my animation code and it would be very expensive in memory... I found other solutions (some of them unworkable), such as using OpenGL or capturing through AVSessions, but none that could help me...
What are my options? Has anyone else run into this problem?
Thanks for your time!
Have you actually tried it? I'm currently working on a project with several 3D transforms, and when I programmatically take this screenshot, it works just fine :)
Here is the code I use:
-(UIImage *)getScreenshot
{
    CGFloat scale = 1.0;
    if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)])
    {
        CGFloat tmp = [[UIScreen mainScreen] scale];
        if (tmp > 1.5)
        {
            scale = 2.0;
        }
    }

    if (scale > 1.5)
    {
        UIGraphicsBeginImageContextWithOptions(self.frame.size, NO, scale);
    }
    else
    {
        UIGraphicsBeginImageContext(self.frame.size);
    }

    // self here is a UIView
    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return screenshot;
}
I got it working with protocols... I'm implementing a protocol in all UIView classes that apply 3D transforms. So when I request a screenshot, each subview takes its own screenshot, and they are combined into one UIImage. Not so good for lots of views, but I'm only doing it for a few.
#pragma mark - Protocol implementation 'TDITransitionCustomTransform'

// Conforms to the "TDITransitionCustomTransform" protocol; returns the current
// state of the view as an image, rendered from its current layer.
- (UIImage *)imageForCurrentState
{
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0.0);
    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *screenShot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return screenShot;
}
I was thinking it may work now because I'm doing the render on the transformed view's own layer, which has been transformed itself...
And it wasn't working before because "renderInContext:" doesn't capture the layers of subviews. Could that be it?
Anyone interested in a bit more code for this solution can find it here, in the Apple dev forum.
It may be a bug in the function, or it may just not be designed for this purpose...
Maybe you can use Core Graphics instead of CATransform3DMakeRotation :)
CGAffineTransform flip = CGAffineTransformMakeScale(1.0, -1.0);
which does take effect in renderInContext:.