background tileSprite is oversized - phaser-framework

I'm creating a simple platform game with the Phaser framework. I've added two background images as tileSprites to provide a parallax background. They are both showing in game; however, they are oversized, roughly double the height I expect.
Sprite asset size - 2048 x 1024
Game world size - 2400 x 1024
Adding the tileSprite
preload: function () {
    this.load.image('far-background', 'assets/far-background.png');
    this.load.image('near-background', 'assets/near-background.png');
},

create: function () {
    this.farBackground = this.add.tileSprite(0, 0, 2400, 1024, 'far-background');
    this.nearBackground = this.add.tileSprite(0, 0, 2400, 1024, 'near-background');
}
How do I make the tilesprites fill the canvas vertically?
Full code can be found here if needed.
Any help much appreciated.

If you have (tile)sprites in the game that are too big, you can try using the scale property to manipulate their size: http://phaser.io/docs/2.2.2/Phaser.TileSprite.html#scale
Here is an example of using the scale property to manipulate a sprite's size: http://phaser.io/examples/v2/sprites/scale-a-sprite
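For instance, a minimal sketch (assuming Phaser 2.x and that the texture shows at roughly twice the intended height) that halves the vertical scale; tileScale scales the repeated texture, while scale scales the whole object:

this.farBackground = this.add.tileSprite(0, 0, 2400, 1024, 'far-background');
// squeeze the tiled texture to half its height so it fits the 1024px-high world
this.farBackground.tileScale.set(1, 0.5);
// alternatively, scale the whole tileSprite instead:
// this.farBackground.scale.set(1, 0.5);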
Hope this helps :)

Related

How can I draw multiple(10000) cubes in the bevy game engine for my 3D voxel game?

When I create a 100 x 100 chunk of cubes in Bevy, it is only able to maintain about 10 fps.
Even if I replace the cubes with something simpler, like planes, I don't get any better performance out of it.
I benchmarked it with MangoHud and it says that my CPU and GPU are only sitting at about 20% usage.
Here is the code I use to generate a 32 x 32 chunk with OpenSimplex noise
    commands: &mut Commands,
    mut meshes: ResMut<Assets<Mesh>>,
    mut materials: ResMut<Assets<StandardMaterial>>,
    asset_server: Res<AssetServer>,
    seed: Res<Seed>,
) {
    let noise = OpenSimplex::new();
    commands
        .spawn(PbrBundle {
            mesh: meshes.add(Mesh::from(shape::Plane { size: 1.0 })),
            material: materials.add(Color::rgb(0.5, 0.5, 1.0).into()),
            transform: Transform::from_translation(Vec3::new(0.0, 0.0, 0.0)),
            ..Default::default()
        })
        .with(Chunk)
        .with_children(|parent| {
            let texture_handle = asset_server.load("textures/dirt.png");
            for x in -32..32 {
                for z in -32..32 {
                    let y = (noise.get([
                        (x as f32 / 20.) as f64,
                        (z as f32 / 20.) as f64,
                        seed.value,
                    ]) * 15. + 16.0) as u32;
                    parent
                        .spawn(PbrBundle {
                            mesh: meshes.add(Mesh::from(shape::Cube { size: 1.0 })),
                            material: materials.add(StandardMaterial {
                                albedo: Color::rgba(1.0, 1.0, 1.0, 1.0),
                                albedo_texture: Some(texture_handle.clone()),
                                ..Default::default()
                            }),
                            transform: Transform::from_translation(Vec3::new(x as f32, y as f32, z as f32)),
                            ..Default::default()
                        })
                        .with(Cube);
                }
            }
        });
}
But 32 x 32 is the absolute maximum for a playable experience.
What do I have to do to be able to draw multiple chunks at the same time?
System specs:
CPU: Intel Core i7-6820HQ @ 2.70GHz
iGPU: Intel HD Graphics 530
dGPU: Nvidia Quadro M2000M
But even when offloading to the more powerful dGPU, I don't get any better performance.
Some optimizations that are immediately visible:
Convert your nested for-loop algorithm into a single for-loop.
It's more cache friendly.
Use math to split the now-single index into x/y/z values to determine position (a short sketch of this appears after this list).
Hidden surface removal.
During mesh creation, instead of adding a whole new cube to the mesh (6 faces, 12 triangles, 24 verts), only add the faces (2 triangles each) that are actually visible, i.e. those that do not have a neighbouring opaque (non-air) block in that direction.
Use indexed drawing instead of vertex-based drawing.
Use a TextureAtlas.
Use one big texture for all cubes instead of a single texture per cube.
This is actually the engine's fault, but it will improve with the 0.5.0 release.
Use meshing algorithms, like greedy meshing.
block-mesh is a good Rust library that already has a couple of voxel meshing algorithms integrated.
Also see vx-bevy to get an idea of how a voxel engine can be built.
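As an illustration of the first point, a minimal Rust sketch (names like CHUNK_SIZE and index_to_xz are my own, not from the original code) of flattening the two chunk loops into one and recovering the x/z coordinates from the single index:

const CHUNK_SIZE: i32 = 64; // -32..32 covers 64 columns per axis

fn index_to_xz(i: i32) -> (i32, i32) {
    // x varies slowest, z fastest; shift both back into the -32..32 range
    let x = i / CHUNK_SIZE - CHUNK_SIZE / 2;
    let z = i % CHUNK_SIZE - CHUNK_SIZE / 2;
    (x, z)
}

fn main() {
    for i in 0..CHUNK_SIZE * CHUNK_SIZE {
        let (x, z) = index_to_xz(i);
        // ...sample the noise at (x, z) and spawn the block as before
        let _ = (x, z);
    }
}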

Direct3D Window->Bounds.Width/Height differs from real resolution

I noticed a strange behaviour with Direct3D while doing this tutorial.
The dimensions I am getting from the Window object differ from the configured resolution of Windows. There I set 1920*1080, but the width and height from the Window object are 1371*771.
CoreWindow^ Window = CoreWindow::GetForCurrentThread();
// set the viewport
D3D11_VIEWPORT viewport = { 0 };
viewport.TopLeftX = 0;
viewport.TopLeftY = 0;
viewport.Width = Window->Bounds.Width; //should be 1920, actually is 1371
viewport.Height = Window->Bounds.Height; //should be 1080, actually is 771
I am developing on an Alienware 14, maybe this causes this problem, but I could not find any answers, yet.
CoreWindow sizes, pointer locations, etc. are not expressed in pixels. They are expressed in Device Independent Pixels (DIPS). To convert to/from pixels you need to use the Dots Per Inch (DPI) value.
inline int ConvertDipsToPixels(float dips) const
{
    return int(dips * m_DPI / 96.f + 0.5f);
}

inline float ConvertPixelsToDips(int pixels) const
{
    return (float(pixels) * 96.f / m_DPI);
}
m_DPI comes from DisplayInformation::GetForCurrentView()->LogicalDpi and you get the DpiChanged event when and if it changes.
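For example, a sketch of applying the helper above to the viewport from the question (assuming m_DPI has been initialised; at roughly 140% display scaling, 1371 DIPs converts back to about 1920 pixels):

CoreWindow^ Window = CoreWindow::GetForCurrentThread();

D3D11_VIEWPORT viewport = { 0 };
viewport.TopLeftX = 0;
viewport.TopLeftY = 0;
// convert the DIP-based bounds to physical pixels before handing them to Direct3D
viewport.Width  = static_cast<float>(ConvertDipsToPixels(Window->Bounds.Width));
viewport.Height = static_cast<float>(ConvertDipsToPixels(Window->Bounds.Height));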
See DPI and Device-Independent Pixels for more details.
You should take a look at the Direct3D UWP Game templates on GitHub, and check out how this is handled in Main.cpp.

unity3d: Use main camera's depth buffer for rendering another camera view

After my main camera renders, I'd like to use (or copy) its depth buffer to a (disabled) camera's depth buffer.
My goal is to draw particles onto a smaller render target (using a separate camera) while using the depth buffer after opaque objects are drawn.
I can't do this in a single camera, since the goal is to use a smaller render target for the particles for performance reasons.
Replacement shaders in Unity aren't an option either: I want my particles to use their existing shaders - I just want the depth buffer of the particle camera to be overwritten with a subsampled version of the main camera's depth buffer before the particles are drawn.
I didn't get any reply to my earlier question; hence, the repost.
Here's the script attached to my main camera. It renders all the non-particle layers and I use OnRenderImage to invoke the particle camera.
public class MagicRenderer : MonoBehaviour {
    public Shader particleShader; // shader that uses the main camera's depth buffer to depth test particle Z
    public Material blendMat;     // material that uses a simple blend shader
    public int downSampleFactor = 1;

    private RenderTexture particleRT;
    private static GameObject pCam;

    void Awake () {
        // make the main camera's depth buffer available to the shaders via _CameraDepthTexture
        camera.depthTextureMode = DepthTextureMode.Depth;
    }

    // Update is called once per frame
    void Update () {
    }

    void OnRenderImage(RenderTexture src, RenderTexture dest) {
        // create tmp RT
        particleRT = RenderTexture.GetTemporary (Screen.width / downSampleFactor, Screen.height / downSampleFactor, 0);
        particleRT.antiAliasing = 1;

        // create particle cam
        Camera pCam = GetPCam ();
        pCam.CopyFrom (camera);
        pCam.clearFlags = CameraClearFlags.SolidColor;
        pCam.backgroundColor = new Color (0.0f, 0.0f, 0.0f, 0.0f);
        pCam.cullingMask = 1 << LayerMask.NameToLayer ("Particles");
        pCam.useOcclusionCulling = false;
        pCam.targetTexture = particleRT;
        pCam.depth = 0;

        // Draw to particleRT's colorBuffer using mainCam's depth buffer
        // ?? - how do I transfer this camera's depth buffer to pCam?
        pCam.Render ();
        // pCam.RenderWithShader (particleShader, "Transparent"); // I don't want to replace the shaders my particles use, so shader replacement isn't an option.

        // blend mainCam's colorBuffer with particleRT's colorBuffer
        // Graphics.Blit(pCam.targetTexture, src, blendMat);

        // copy resulting buffer to destination
        Graphics.Blit (pCam.targetTexture, dest);

        // clean up
        RenderTexture.ReleaseTemporary(particleRT);
    }

    static public Camera GetPCam() {
        if (!pCam) {
            GameObject oldpcam = GameObject.Find("pCam");
            Debug.Log (oldpcam);
            if (oldpcam) Destroy(oldpcam);
            pCam = new GameObject("pCam");
            pCam.AddComponent<Camera>();
            pCam.camera.enabled = false;
            pCam.hideFlags = HideFlags.DontSave;
        }
        return pCam.camera;
    }
}
I've a few additional questions:
1) Why does camera.depthTextureMode = DepthTextureMode.Depth; end up drawing all the objects in the scene just to write to the Z-buffer? Using Intel GPA, I see two passes before OnRenderImage gets called:
(i) Z-PrePass, that only writes to the depth buffer
(ii) Color pass, that writes to both the color and depth buffer.
2) I re-rendered the opaque objects to pCam's RT using a replacement shader that writes (0,0,0,0) to the colorBuffer with ZWrite On (to overcome the depth buffer transfer problem). After that, I reset the layers and clear mask as follows:
pCam.cullingMask = 1 << LayerMask.NameToLayer ("Particles");
pCam.clearFlags = CameraClearFlags.Nothing;
and rendered them using pCam.Render().
I thought this would render the particles using their existing shaders with the ZTest.
Unfortunately, what I notice is that the depth-stencil buffer is cleared before the particles are drawn (in spite of me not clearing anything).
Why does this happen?
It's been 5 years, but I developed an almost complete solution for rendering particles into a smaller separate render target. I am writing this for future visitors. A lot of knowledge is still required.
Copying the depth
First, you have to get the scene depth in the resolution of your smaller render texture.
This can be done by creating a new render texture with the color format "depth".
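A minimal C# sketch (assuming the downSampleFactor from the question; the field name matches the one used later in this answer):

// low-resolution render texture whose "colour" format is depth
mSmallerSceneDepthTexture = new RenderTexture(
    Screen.width / downSampleFactor,
    Screen.height / downSampleFactor,
    24,                             // depth buffer bits
    RenderTextureFormat.Depth);
mSmallerSceneDepthTexture.Create();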
To write the scene depth to the low resolution depth, create a shader that just outputs the depth:
struct fragOut {
    float depth : DEPTH;
};

sampler2D _LastCameraDepthTexture;

fragOut frag (v2f i) {
    fragOut tOut;
    tOut.depth = tex2D(_LastCameraDepthTexture, i.uv).x;
    return tOut;
}
_LastCameraDepthTexture is automatically filled by Unity, but there is a downside.
It only comes for free if the main camera renders with deferred rendering.
For forward shading, Unity seems to render the scene again just for the depth texture.
Check the frame debugger.
Then, add a post processing effect to the main camera that executes the shader:
protected virtual void OnRenderImage(RenderTexture pFrom, RenderTexture pTo) {
    Graphics.Blit(pFrom, mSmallerSceneDepthTexture, mRenderToDepthMaterial);
    Graphics.Blit(pFrom, pTo);
}
You can probably do this without the second blit, but it was easier for me for testing.
Using the copied depth for rendering
To use the new depth texture for your second camera, call
mSecondCamera.SetTargetBuffers(mParticleRenderTexture.colorBuffer, mSmallerSceneDepthTexture.depthBuffer);
Keep targetTexture empty.
You then must ensure the second camera does not clear the depth, only the color.
For this, disable clearing on the second camera completely and clear manually like this:
Graphics.SetRenderTarget(mParticleRenderTexture);
GL.Clear(false, true, Color.clear);
I recommend also rendering the second camera by hand. Disable it and call
mSecondCamera.Render();
after clearing.
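Tying those calls together, a hedged sketch of the setup and per-frame order (field names as above; mSecondCamera is the disabled particle camera, and this is only one way to arrange it):

// One-time setup: route output to our buffers, never clear automatically, render manually
mSecondCamera.SetTargetBuffers(mParticleRenderTexture.colorBuffer,
                               mSmallerSceneDepthTexture.depthBuffer);
mSecondCamera.clearFlags = CameraClearFlags.Nothing;
mSecondCamera.enabled = false;

// Per frame, after the depth copy above:
Graphics.SetRenderTarget(mParticleRenderTexture);
GL.Clear(false, true, Color.clear); // clear the colour buffer only; depth is left untouched
mSecondCamera.Render();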
Merging
Now you have to merge the main view and the separate layer.
Depending on your rendering, you will probably end up with a render texture with so-called premultiplied alpha.
To mix this with the rest, use a post processing step on the main camera with
fixed4 tBasis = tex2D(_MainTex, i.uv);
fixed4 tToInsert = tex2D(TransparentFX, i.uv);
//beware premultiplied alpha in insert
tBasis.rgb = tBasis.rgb * (1.0f- tToInsert.a) + tToInsert.rgb;
return tBasis;
Additive materials work out of the box, but alpha-blended ones do not.
You have to create a shader with custom blending to get working alpha-blended materials. The blending is
Blend SrcAlpha OneMinusSrcAlpha, One OneMinusSrcAlpha
This changes how the alpha channel is modified for every performed blending.
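For illustration, a minimal ShaderLab sketch (not from the original post) of where that blend line sits inside an alpha-blended particle shader:

SubShader {
    Tags { "Queue" = "Transparent" "RenderType" = "Transparent" }
    Pass {
        // colour factors before the comma, alpha factors after it
        Blend SrcAlpha OneMinusSrcAlpha, One OneMinusSrcAlpha
        ZWrite Off
        // ...the shader's usual CGPROGRAM / ENDCG block goes here
    }
}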
Results
(Screenshots in the original answer show the results: additive blending in front of alpha blending, and alpha blending in front of additive blending, each with the fx layer's RGB and alpha channels.)
I have not yet tested whether the performance actually increases.
If anyone has a simpler solution, please let me know.
I managed to reuse the camera's Z-buffer "manually" in the shader used for rendering. See http://forum.unity3d.com/threads/reuse-depth-buffer-of-main-camera.280460/ for more.
Just alter the particle shader you already use for particle rendering.

How do you rotate SVG images in processing.js

I am just starting with processing.js and I have been having trouble, because every time I rotate an image it also changes its location on the screen. What Processing seems to do is rotate my image around the point I told it to place it at, instead of rotating it first around its own axis and then placing it where I told it to (which, I figured, cannot be done in that way/order).
This is the code
PShape s;
float angle = 0.1; // rads
s = loadShape("sensor.svg");
s.rotate(angle);
// I change this angle manually or with my clickMouse function, which isn't shown.

void setup() {
    size(400, 350);
    frameRate(30); // 30 frames per second
}

void draw() { // shape(shape, x, y, width, height);
    smooth();
    fill(153);
    ellipse(200, 350/2, 100, 100);
    shape(s, 200, 350/2, 20, 20);
    ellipse(200, 350/2, 2, 2);
}
What I am basically trying to do is make this "sensor" image rotate in the correct orientation around the circle (ellipse) that I drew. That's the idea, but it's doing neither. Maybe a click function could rotate the SVG image around the circle. Instead, it rotates around the coordinates passed to the shape(image, x_coord, y_coord, width, height) call. If anyone has any suggestions, I would be so happy! I hope my question makes sense; if it doesn't, I would be more than happy to clarify any part of it.
Thanks! :)
It's much easier not to rotate your shape, but to rotate the coordinate system.
void draw() {
    translate(s.width/2, s.height/2);
    rotate(PI/4);
    shape(s);
    resetMatrix();
    // keep on drawing here
}
This first moves the coordinate system so that (0,0) is on top of the center of your shape, then rotates the entire coordinate system by 45 degrees, then draws your shape. Then you reset the coordinate system and keep drawing as usual.
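Applied to the question's setup, a hedged sketch (one reading of the goal: spin the 20x20 sensor about the circle's centre at (200, 175); it assumes s.rotate(angle) is no longer called on the PShape itself):

void draw() {
    smooth();
    fill(153);
    ellipse(200, 175, 100, 100);

    pushMatrix();
    translate(200, 175);        // move the origin to the circle's centre
    rotate(angle);              // rotate the coordinate system by the current angle
    shape(s, -10, -10, 20, 20); // draw the 20x20 sensor centred on the new origin
    popMatrix();

    ellipse(200, 175, 2, 2);
}

Using pushMatrix()/popMatrix() instead of resetMatrix() restores whatever transform was active before, which is handy once more drawing happens around this block.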

Scaling SVG objects defined in pixels to physical units?

I have an SVG element defined with a width and height of "297mm" and "210mm", and no viewBox. I do this because I want to work inside an A4 viewport, but I want to create elements in pixels.
At some point, I need to scale up an object defined in pixels so that it fits into part of the A4 page. For example, I might have an object which is 100 px wide, that I want to scale to 30mm horizontally.
Is this possible? I think I need a second container with a new co-ordinate system, but I can't find any way to do this.
EDIT
It seems that I (or SVG) have misunderstood what a pixel is. I realised that I could size a line to 100% and then get its pixel size with getBBox to find the scaling required. I wrote this code and ran it on 2 clients, one with a 1280x1024 monitor (80 dpi) and one with a 1680x1050 LCD (90 dpi):
function getDotsPerInch() {
    var hgroup, vgroup, hdpi, vdpi;
    hgroup = svgDocument.createElementNS(svgNS, "g");
    vgroup = svgDocument.createElementNS(svgNS, "g");
    svgRoot.appendChild(hgroup);
    svgRoot.appendChild(vgroup);
    drawLine(hgroup, 0, 100, "100%", 100, "#202020", 1, 1);
    drawLine(vgroup, 100, 0, 100, "100%", "#202020", 1, 1);
    hdpi = hgroup.getBBox().width * 25.4 / WIDTH;
    vdpi = vgroup.getBBox().height * 25.4 / HEIGHT;
    drawText(hgroup, "DPI, horizontal: " + hdpi.toFixed(2), 100, 100);
    drawText(hgroup, "DPI, vertical: " + vdpi.toFixed(2), 100, 120);
}
IE9, FF, Opera, and Chrome all agree that both monitors are 96 dpi horizontally and vertically (although Opera is slightly inaccurate), and Safari reports 0 dpi on both monitors. So SVG just appears to have defined "pixels" as 96 dpi. Some quick Googling appears to confirm this, though I haven't found anything definitive, and most hits give 90 dpi, with 96 dpi as the FF variant.
You can nest <svg> elements and use the viewBox attribute on the child element to get a new coordinate system. Give the child <svg> element x, y, width and height attributes of where you want it to appear on the parent in the parent's co-ordinate system.
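For example, a hedged markup sketch (the x/y placement values are made up) that maps a 100-pixel-wide drawing onto a 30mm slot of the A4 page:

<svg width="297mm" height="210mm" xmlns="http://www.w3.org/2000/svg">
  <!-- child svg: 100 user units (the "pixel" content) are stretched to 30mm x 30mm -->
  <svg x="20mm" y="20mm" width="30mm" height="30mm" viewBox="0 0 100 100">
    <rect x="0" y="0" width="100" height="100" fill="steelblue"/>
  </svg>
</svg>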
