I'm working on a Phaser project for work and I've inherited the existing code base.
I have a bit of an odd situation.
The game size is 1024 by 748. The world bounds are set by
game.world.setBounds(-1024,0,3072,748)
When adding sprites, the code base I've adopted has
game.add.sprite(2424,0,'name of atlas','name of png')
Based on this it appears the previous developer was creating these sprites off screen.
When I inherited it, the project was using Phaser 2.1. I upgraded to Phaser 2.2.1 because it included an Internet Explorer fix I needed.
Now, however, half the time the sprites are created on the screen and the other half of the time they are created off screen.
What was changed between the versions to create this type of issue?
Also note that if I add a sprite with an x coordinate between 0 and 1024, I indeed see the sprite moving from the left to the right of the screen.
From 1024 to roughly 2200 I don't see the sprite at all.
However, adding a sprite with an x coordinate from 2200 to roughly 2800 has the odd effect of it starting at the far right of the screen and moving left. That means when I use 2200 the sprite is at the right edge of my 1024-wide game screen, and when I use 2800 it is at the left edge.
This is counterintuitive to me, because as x increases the sprite should move farther away from the left.
Anyone know why or what is happening here?
How is it possible that an object can pass through a ring sprite like in the image below?
Please can you help me? I have no idea how I can do that.
I think you posted an incorrect image. To get the image you posted, you just have to draw the red bar on top of the black ring.
I guess you want the left side of the ring to be on top of the bar and the right side to go behind it, so it visually passes through. Well, this is simply not so easy in 2D because of draw order.
I have a couple of suggestions you can explore.
Always draw the ring on top of the bar but when a collision is happening you calculate where the bar overlaps and don't draw the pixels in that place. You can use a Pixmap for calculations like this. Depending on the size of your images this could be very expensive to calculate each frame.
A faster but slightly more hacky way could be to split the red bar into multiple images; if a certain part of it should be overlapped by the ring, draw it first, otherwise draw it after the ring. Depending on what the red bar will look like in your end product and how many possible angles it could have, I can imagine this can be very tricky to get right.
Use 3D for this. You could have a billboard with a slight angle for the ring and have the bar locked on the distance axis at the ring's center. However, at certain angles of entrance and exit you will get Z-fighting, since the pixels will be at the same distance from the camera. This might or might not be noticeable, and I have no idea how LibGDX would handle Z-fighting.
I want to add this solution:
If the object is going to pass through the ring horizontally, I propose splitting the ring sprite into two sprites (sprite 1 and sprite 2).
You then just have to draw the sprites in this order:
Sprite1
Sprite Object
Sprite2
You can do the same if the object is going to pass through the ring vertically; see the sketch below.
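A minimal LibGDX sketch of that draw order, assuming the ring has been cut into a "back half" and a "front half" texture (all file names and coordinates here are made up for illustration):

import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;

// Hypothetical assets: the ring split into the half behind the bar
// and the half in front of it, plus the bar itself.
SpriteBatch batch = new SpriteBatch();
Texture ringBack  = new Texture("ring_back.png");
Texture ringFront = new Texture("ring_front.png");
Texture bar       = new Texture("bar.png");

void render(float ringX, float ringY, float barX, float barY) {
    batch.begin();
    batch.draw(ringBack,  ringX, ringY);   // sprite 1: the half behind the bar
    batch.draw(bar,       barX,  barY);    // the object passing through
    batch.draw(ringFront, ringX, ringY);   // sprite 2: the half in front
    batch.end();
}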
PS: this solution doesn't work if the object is going to pass through the ring both vertically and horizontally.
Hope this was helpful.
Good luck
I'm starting to develop a proof of concept with the main features of a turn-based RPG similar to Breath of Fire 4: a mixture of a 3D environment with characters and items as billboards.
I'm using an orthographic camera with an angle of 30 degrees on the X axis. I made my sprite act as a billboard with the pivot in the center; the problem occurs when the sprite gets near a 3D object such as a wall.
Check out the image:
I tried the solution of leaving the rotation matrix of the billboard "upright", which worked well, but of course, depending on the height and angle of the camera toward the billboard, it gets somewhat flattened. I also changed the pivot to the bottom of the sprite, but the problem appears with objects in front of the sprite too. I was thinking the solution would be a fragment shader that relies on the depth texture from some previous pass, but I could not figure out how to do it. Could you point me to an article or anything that puts me in the right direction? Thank you.
See what I am trying to achieve on this video.
You had the right approach. Use the upright matrix, and scale up the Z of the billboards to compensate for the flattening caused by your camera. The Z scaling should be about 1.1547, which is 1 / cos(30°); that makes the billboards look their original size from a camera at an angle of 30 degrees. It seems like a tricky way, but the developers of BoF4 in the video might have used the same solution too.
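A quick sketch of that factor in plain Java (the helper name is made up; apply the result along whichever axis your camera flattens, Z in the answer above):

// Stretch factor that undoes the foreshortening of an upright billboard
// seen by a camera tilted by 'angleDegrees'.
static float billboardStretchFactor(float angleDegrees) {
    return 1f / (float) Math.cos(Math.toRadians(angleDegrees));
}

// billboardStretchFactor(30f) ≈ 1.1547, i.e. 1 / cos(30°).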
This question is actually about Unity3D, but it can also be a more general question, so I'm going to keep it as general as possible.
Suppose I have a scene with a camera (near = 0.3, far = 1000, fov = 60) and I want to draw a skydome that is 10000 units in radius.
The object is not culled by the camera's frustum, because I'm inside the dome. But the vertices are still being clipped somehow, and the end result looks like this:
Now my question is:
What setting (in any engine) can I change to make sure that the complete object is drawn and not clipped by the far plane of the camera?
What I don't want is:
Change the far plane to 10000, because it makes the frustum less accurate
Change the near plane, because my game is actually on a very low scale
Change the scale of the dome, because this setting looks very realistic
I do not know how to do this in Unity, but in DirectX and OpenGL you switch off the z-buffer (both testing and writing) and draw the skybox first.
Then you switch on the zbuffer and draw the rest of the scene.
My guess is that Unity can do all this for you.
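A minimal sketch of that render order in OpenGL terms, shown here with LibGDX's GL20 bindings rather than Unity (drawSkybox and drawScene are placeholders for your own draw calls):

import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.GL20;

public void render() {
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);

    // 1. Draw the skybox first with depth testing and depth writes off,
    //    so its distance from the camera never matters.
    Gdx.gl.glDisable(GL20.GL_DEPTH_TEST);
    Gdx.gl.glDepthMask(false);
    drawSkybox();   // placeholder for the skydome draw call

    // 2. Re-enable the depth buffer and draw everything else on top.
    Gdx.gl.glEnable(GL20.GL_DEPTH_TEST);
    Gdx.gl.glDepthMask(true);
    drawScene();    // placeholder for the rest of the scene
}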
I have two solutions for my own problem. The first one doesn't solve everything. The second does, but is against my own design principles.
There was no way for me to change the shader's z-writing, which is a great suggestion from Erno, because the shaders used are third-party.
Option 1
Just before the object is rendered, set the far plane to 100,000 and set it back to 1000 after drawing the sky.
Problem: The depth buffer is still filled with values between very low and 100,000. This decreases the accuracy of the depth buffer and gives problems with z-fighting and post-effects that depend on the depth buffer.
Option 2
Create two cameras that are linked to each other. Camera 1 renders the skydome first with a setting of far = 100000, near = 100. Camera 2 clears the depth buffer and draws the rest of the scene with a setting of far = 1000, near = 0.3. The depth buffer doesn't contain big values now, so that solves the problems of inaccurate depth buffers.
Problem: The cameras have to be linked by some polling system, because there are no change events on the camera class (e.g. when the FoV changes). I like the fact that there is only one camera, but that doesn't seem to be possible quite so easily.
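The question is about Unity, but a rough sketch of the same two-pass idea in LibGDX/OpenGL terms (camera values copied from the option above; the draw methods are placeholders) might look like this:

import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.PerspectiveCamera;

public class TwoCameraSkyRenderer {
    // Far-range camera only for the skydome, near-range camera for the scene.
    private final PerspectiveCamera skyCamera =
            new PerspectiveCamera(60f, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
    private final PerspectiveCamera sceneCamera =
            new PerspectiveCamera(60f, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());

    public TwoCameraSkyRenderer() {
        skyCamera.near = 100f;    skyCamera.far = 100000f;
        sceneCamera.near = 0.3f;  sceneCamera.far = 1000f;
    }

    public void render() {
        // Keep the cameras "linked" by copying position and direction each frame.
        skyCamera.position.set(sceneCamera.position);
        skyCamera.direction.set(sceneCamera.direction);
        skyCamera.update();
        sceneCamera.update();

        drawSkydome(skyCamera);                    // pass 1: sky only
        Gdx.gl.glClear(GL20.GL_DEPTH_BUFFER_BIT);  // wipe the sky pass's depth values
        drawScene(sceneCamera);                    // pass 2: the rest of the scene
    }

    private void drawSkydome(PerspectiveCamera cam) { /* hypothetical */ }
    private void drawScene(PerspectiveCamera cam)   { /* hypothetical */ }
}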
I have made 3 planes and positioned them so that they form the corner of a cube. (For certain reasons I don't want to make a cube object.) The 3 planes have 3 different Texture2Ds with different images. The strange problem is that when I render the 3 objects and start rotating the camera, from some perspectives parts of these 3 planes don't get rendered. For example, when I look straight at the corner, a hole is created which is shaped like a triangle. This is an image of the problem in a NetBeans emulator:
http://www.pegahan.com/m3g.jpg
I put the red lines there so you can see the cube better.
The other strange thing is that the problem resolves when I set the scale of the objects to 0.5 or less.
By the way the camera is in its default position and the cube's center is at (0,0,0) and each plane has a width and height of 2.
Does anyone have any idea why these objects conflict with each other and how I could resolve this problem?
Thanks in advance
Looks like a classic case of the "box bigger than the camera's far clipping plane" error :)
Since I don't know anything about M3G, I can only point you to Google for that.
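If that is the cause, M3G (JSR-184) sets the clipping planes when you configure the camera's projection, so a hedged sketch of the fix would be something like this (the angle, aspect and plane values are illustrative only):

import javax.microedition.m3g.Camera;

Camera camera = new Camera();
float screenWidth  = 240f;   // your canvas size; values here are made up
float screenHeight = 320f;
// fovy, aspect ratio, near plane, far plane: pick a far value large enough
// that the whole corner (planes of size 2 around the origin) fits inside it.
camera.setPerspective(60.0f, screenWidth / screenHeight, 0.1f, 50.0f);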
I'm working on a UI which needs to work in different aspect ratios: 16:9, 16:10, 4:3.
The idea is conceptually simple: Everything is centered to the screen in a rough 4:3 area and anything outside this portion of screen has basic artwork, so something like this:
(not drawn to scale)
Where the pink area represents where all the UI objects are positioned and the blue area is just background and effects.
The trick is in usability: if I pass in coordinates (0,0) in a 4:3 aspect ratio environment, (0,0) would be the top left of the screen. However, if I'm in a 16:9 environment, (0,0) needs to be renormalized based on the new aspect ratio for it to be in the appropriate place. So my question is: how can I achieve this?
edit: for clarification this is basically for a UI system and while I listed the ratios above as 4:3, 16:9, 16:10 it should be able to dynamically adjust values for whatever aspect ratio it is set to.
edit 2: Just to add more details to the situation: when positions for setting are passed in, they are passed in as a percentage of the screen's current width/height, so setting position x would basically be: [pos x as portion of screen] * SCREEN_WIDTH, where SCREEN_WIDTH is the width of the current screen itself.
The obvious answer seems to be an offset. Since 4x3 is 12x9, it appears you want a 16x9 screen to have 2x9 bands to the left and the right. Hence, the X offset should be (2/16) * width.
For 16x10 screens the numbers are slightly less round: 4x3 is 13.33x10, so you have edges of width 1.33, and the X offset should be (1.33/16) * width = (1/12) * width.
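A small sketch of that math in plain Java (the helper and function names are made up): compute the width of a centered 4:3 area from the screen height, and the band on each side is whatever width is left over, halved.

// Width of a 4:3 area that fills the screen height.
static float activeAreaWidth(float screenHeight) {
    return screenHeight * 4f / 3f;
}

// Horizontal offset of the active area's left edge, in pixels.
static float activeAreaOffsetX(float screenWidth, float screenHeight) {
    return (screenWidth - activeAreaWidth(screenHeight)) / 2f;
}

// A UI x position given as a fraction of the active 4:3 area,
// matching the percentage-based positions mentioned in edit 2.
static float uiX(float fraction, float screenWidth, float screenHeight) {
    return activeAreaOffsetX(screenWidth, screenHeight)
         + fraction * activeAreaWidth(screenHeight);
}

// Examples:
// 1920x1080 (16:9):  offset = (1920 - 1440) / 2 = 240 = (2/16) * 1920
// 1920x1200 (16:10): offset = (1920 - 1600) / 2 = 160 = (1/12) * 1920
// 1024x768  (4:3):   offset = (1024 - 1024) / 2 = 0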
So ... can't you just come up with an abstraction layer that hides the differences? One idea could be to model a "border" around the active area that gets added. For 4:3 displays, set the border size to 0 to make the active area cover the full screen.