I have a PixiJS canvas rendered with WebGLRenderer and I need a way to make a decent-looking snapshot of one of my containers with Graphics elements in it. The problem is that rendering to texture and cacheAsBitmap don't use antialiasing in this render mode.
Is there a way to create a CanvasRenderer and use it to render my container into a texture? It seems like the active WebGLRenderer does not allow anything else: no error is thrown, but the rendered texture has only eternal darkness in it. So maybe there is a way to disable it for a moment and then turn it back on again?
There are other ways to get almost what I want. Using renderer.generateTexture with the resolution set to 2 helps a bit. I can also create a really large clone of my container, make some snapshots with generateTexture and then scale them down. But it still looks a bit odd, and it feels like a dirty way to do things too, especially since cloning a container with a lot of content can be problematic in some cases.
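For reference, the resolution workaround I mentioned looks roughly like this; it's a sketch assuming a PixiJS v4-style generateTexture(displayObject, scaleMode, resolution) signature (newer versions take an options object instead):

```ts
import * as PIXI from 'pixi.js';

const app = new PIXI.Application({ antialias: true });

function snapshot(container: PIXI.Container): PIXI.Sprite {
  // Render the container into a texture with a 2x (or higher) backing resolution.
  // The texture keeps its logical size, but the extra pixels get filtered down
  // on display, which hides most of the missing MSAA on the Graphics edges.
  const texture = app.renderer.generateTexture(
    container,
    PIXI.SCALE_MODES.LINEAR,
    2
  );
  return new PIXI.Sprite(texture);
}
```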
I have been looking at Epic Games' Fortnite website and I am trying to figure out how they achieved the 3D effect on the page:
Epic Games' Fortnite website - scrolled down to 3rd slide
Does anyone have any idea how to do it? I would really like to do something similar for a project I'm working on. I have found Three.js, but I am quite sure that is not the solution they went with.
For these types of questions, I can only recommend installing spector.js and having a look yourself. In short: everything you see is 100% faked.
I mean, that's always the case. In fact, if you want to build something like that, your first question should always be: how much of this can I fake and still get away with it?
In this example, it turns out: everything. Just open the devtools and click through all the assets in the network tab. You will find these two textures:
looks familiar, right?
So what they appear to be doing is using three.js with some custom shaders to handle the translations, the flickering of the lights and the highlighting. These effects are computed using the normal map and an additional mask texture whose purpose I couldn't quite figure out. But again, if you look at the scene in spector.js, you can see the shaders used for every draw call.
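If you want to play with the same idea, here is a rough three.js sketch of a textured plane whose fragment shader uses a normal map to fake a moving, flickering light. The asset names, uniforms and the exact lighting math are all made up for illustration; the real shaders are certainly more elaborate:

```ts
import * as THREE from 'three';

const loader = new THREE.TextureLoader();
// Placeholder assets: a pre-rendered color image plus its normal map.
const colorMap = loader.load('slide_color.jpg');
const normalMap = loader.load('slide_normal.jpg');

const material = new THREE.ShaderMaterial({
  uniforms: {
    uColor: { value: colorMap },
    uNormal: { value: normalMap },
    uMouse: { value: new THREE.Vector2(0.5, 0.5) }, // fake light follows the cursor
    uTime: { value: 0 },
  },
  vertexShader: /* glsl */ `
    varying vec2 vUv;
    void main() {
      vUv = uv;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
  `,
  fragmentShader: /* glsl */ `
    uniform sampler2D uColor;
    uniform sampler2D uNormal;
    uniform vec2 uMouse;
    uniform float uTime;
    varying vec2 vUv;
    void main() {
      vec3 base = texture2D(uColor, vUv).rgb;
      // Normal map is stored in [0,1]; remap to [-1,1].
      vec3 n = normalize(texture2D(uNormal, vUv).rgb * 2.0 - 1.0);
      // Fake point light hovering where the cursor is, with a sine flicker.
      vec3 lightDir = normalize(vec3(uMouse - vUv, 0.3));
      float flicker = 0.85 + 0.15 * sin(uTime * 8.0);
      float diffuse = max(dot(n, lightDir), 0.0) * flicker;
      gl_FragColor = vec4(base * (0.4 + 0.6 * diffuse), 1.0);
    }
  `,
});

const slide = new THREE.Mesh(new THREE.PlaneGeometry(2, 2), material);
```

In the render loop you would update material.uniforms.uTime.value and feed uMouse from a mousemove handler, which is where the cursor-reactive lighting comes from.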
The only thing that is a bit more complex is the little robot friend in the bottom-left corner. But again, it's not 3D as in meshes and so on, but rather a set of flat textured quads driven by a bone animation.
I think that makes it a really great website after all.
Given that Epic builds the Unreal Engine, I would suspect the original renders were done there. And I agree, the lighting looks really amazing :)
It is a simple parallax effect using animated sprite sheets.
The parallax effect is achieved by stacking several layers of images/video on top of one another at different Z-depths.
You can achieve the moving part by using the mousemove event to track the cursor.
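A minimal sketch of that part (plain TypeScript/DOM, assuming each layer is an element with a data-depth attribute):

```ts
// Mouse-driven parallax: the deeper the layer, the more it shifts with the cursor.
// Markup assumption: <div class="layer" data-depth="0.2">…</div>
const layers = document.querySelectorAll<HTMLElement>('.layer');

document.addEventListener('mousemove', (e: MouseEvent) => {
  // Cursor position relative to the viewport center, normalized to [-1, 1].
  const nx = (e.clientX / window.innerWidth) * 2 - 1;
  const ny = (e.clientY / window.innerHeight) * 2 - 1;

  layers.forEach((layer) => {
    const depth = parseFloat(layer.dataset.depth ?? '0');
    const maxShift = 40; // px of travel for a layer at depth 1
    layer.style.transform =
      `translate3d(${(-nx * depth * maxShift).toFixed(1)}px, ` +
      `${(-ny * depth * maxShift).toFixed(1)}px, 0)`;
  });
});
```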
In Flixel, you cannot add a FlxSprite to another one, like you could with the Flash API (Sprite was a subclass of DisplayObject). So if you want two sprites to behave like parent and child, you still have to animate them separately. This can become a nightmare if you use tweens too.
For example, imagine a rotating spaceship with attached thrusters, or a moving character that wears armor, a hat, a shield, etc.
Is there a way to have a 'child' sprite act as if it were added to a 'parent' one, so that it automatically updates its position, scale and rotation accordingly? For example, during their FlxGroup's update() function?
I'm interested in HaxeFlixel 3.3.1, although it doesn't really matter, as this applies to all versions and ports of Flixel.
Edit: I noticed that HaxeFlixel features FlxSpriteGroup, which is supposed to handle multiple sprites as one. But this is a new feature, and I'm pretty sure the Flixel developers use different approaches in the rest of the Flixel ports.
There's a limited version of this available using: http://api.haxeflixel.com/flixel/addons/display/FlxNestedSprite.html
HaxeFlixel provides FlxSpriteGroup and FlxNestedSprite, both of which can be used to have some sprites behave as a group. In both approaches I had problems updating the angle and scale properties; updating the position, however, seems to work fine.
If you are not interested in animating the sprites separately, and painting one sprite over another is what you're after, FlxSprite's stamp() function could be what you need (e.g. drawing a helmet over your character).
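If neither of those works for angle and scale, the manual route the question hints at boils down to reapplying the parent's transform to each child every update. Here is a sketch of that math in TypeScript rather than Haxe; the property names are illustrative, not Flixel API:

```ts
interface SpriteLike {
  x: number; y: number;          // world position
  angle: number;                 // rotation in degrees, as in Flixel
  scaleX: number; scaleY: number;
}

// offsetX/offsetY: where the child sits in the parent's local space
// (e.g. a thruster a few pixels behind the ship's center).
function syncChildToParent(
  parent: SpriteLike,
  child: SpriteLike,
  offsetX: number,
  offsetY: number,
): void {
  const rad = (parent.angle * Math.PI) / 180;
  const cos = Math.cos(rad);
  const sin = Math.sin(rad);

  // Scale the local offset, then rotate it into world space.
  const ox = offsetX * parent.scaleX;
  const oy = offsetY * parent.scaleY;
  child.x = parent.x + ox * cos - oy * sin;
  child.y = parent.y + ox * sin + oy * cos;

  // Child inherits the parent's rotation and scale.
  child.angle = parent.angle;
  child.scaleX = parent.scaleX;
  child.scaleY = parent.scaleY;
}
```

You would call this for every child from the group's update(), after the parent has been moved or tweened for that frame.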
I have a network diagram (nodes and edges) in SVG, generated by the GraphViz tool. I want to make the diagram interactive: it should be draggable, clicking a node should hide some other nodes, etc. Can anyone tell me whether Snap.svg is sufficient for that? My restriction is that I cannot add anything to the SVG diagram itself. How can we make an existing SVG diagram force-directed? Any help, starting point or fiddle would be appreciated. I have hands-on experience with d3.js; is this achievable with d3.js?
This should be possible, but be aware that Snap.svg isn't very compatible with older browsers (in which case you could look at Raphael, which is Snap's older brother; d3 is very well established as well). They all have the basics: handlers, animation, etc. However, although it's possible, it may well be quite a lot of work (so you may want to stick with what you know).
You may want to look into whether you want to auto-create connecting elements and move them automatically, or whether you are happy moving endpoints manually.
It may also depend on things like what you want to happen after dragging the elements. Do you want to save them? Some of that may be quite important to how you approach it, maybe more important than the dragging itself, as most libraries will support that.
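To give you a starting point with d3: GraphViz's SVG output wraps each node and edge in a <g class="node"> / <g class="edge"> group, so you can usually hook into those without modifying the file. A rough sketch (d3 v6+ event signature; the selectors and the hide rule are illustrative, and note that edges won't follow dragged nodes unless you update them yourself):

```ts
import * as d3 from 'd3';

const svg = d3.select<SVGSVGElement, unknown>('svg');

// Make every GraphViz node group draggable by accumulating the drag delta
// into a translate on the group (nodes normally have no transform of their own).
svg.selectAll<SVGGElement, unknown>('g.node').call(
  d3.drag<SVGGElement, unknown>().on('drag', function (event) {
    const g = d3.select(this);
    const dx = (parseFloat(g.attr('data-dx')) || 0) + event.dx;
    const dy = (parseFloat(g.attr('data-dy')) || 0) + event.dy;
    g.attr('data-dx', dx)
      .attr('data-dy', dy)
      .attr('transform', `translate(${dx},${dy})`);
  })
);

// Hide every other node when a node is clicked (replace with your own rule).
svg.selectAll<SVGGElement, unknown>('g.node').on('click', function () {
  const clicked = this;
  svg.selectAll<SVGGElement, unknown>('g.node')
    .filter(function () { return this !== clicked; })
    .style('display', 'none');
});
```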
I have been tearing my hair out for a while over this. I need an OpenGL 3.2 Core (no deprecated stuff!) way to efficiently render many sprites, using batching (no instancing).
I've seen examples that do this with geometry alone, but mine also needs textures, and I don't know how to do that.
I need a well-done example of it working in action. Looking at how other libs like MonoGame do it isn't much help, because all I'm interested in is the GL code, and it has to have no deprecated stuff in it.
Basically I want to be able to efficiently render thousands+ of sprites, all having textures. The texture is just a spritesheet, so I just need to tell it to render a region of that spritesheet.
I'm disappointed in the amount of material available for the programmable pipeline, to the point where it seems like it would be so much easier to just say screw it and use the fixed pipeline, even though I definitely don't want to do that.
So yeah, any full examples that do what I want? Or could somebody more knowledgeable write one up? :)
A lot of the examples are "oh, here's how you render 1 triangle". Well, that's great, except nobody needs to render only 1 triangle/quad. And they need to be textured on top of that!
What I need is an example that uses VBOs/VAOs/EBOs.
ALSO: this means the code can't use glTexCoordPointer and that stuff, but just raw VBOs/VAOs...
I saw this question and decided to write a little program that does some "sprite" rendering using points and gl_PointSize. I'm not quite sure what you mean by "batching" as opposed to "instancing", but my program uses the glDrawArraysInstanced() call so that I can render multiple points without needing my VBO to be variable-sized. My code also doesn't texture the sprites, but that's easy enough to add in (upload the index of the texture unit that was active during your call to glTexSubImage2D to a GLSL sampler2D using glUniform1i).
Anyway, here's the program I wrote: http://litherum.blogspot.com/2013/02/sprites-in-opengl-programmable-pipeline.html Hope you can learn from it!
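For completeness, here is a sketch of the non-instanced batching the question asks about: one interleaved VBO of textured quads, an EBO, a VAO and a single draw call against a sprite-sheet texture. It is written in TypeScript against WebGL2 / GLSL ES 3.00 rather than desktop GL 3.2 core, but the concepts map one-to-one and nothing deprecated is used; all names are illustrative:

```ts
interface Sprite { x: number; y: number; w: number; h: number;
                   u0: number; v0: number; u1: number; v1: number; } // atlas UVs

const canvas = document.querySelector('canvas') as HTMLCanvasElement;
const gl = canvas.getContext('webgl2')!;

const vsSource = `#version 300 es
layout(location = 0) in vec2 aPos;
layout(location = 1) in vec2 aUV;
uniform mat4 uProjection;
out vec2 vUV;
void main() {
  vUV = aUV;
  gl_Position = uProjection * vec4(aPos, 0.0, 1.0);
}`;

const fsSource = `#version 300 es
precision mediump float;
in vec2 vUV;
uniform sampler2D uAtlas;   // sampler defaults to texture unit 0
out vec4 outColor;
void main() { outColor = texture(uAtlas, vUV); }`;

function compile(type: number, src: string): WebGLShader {
  const shader = gl.createShader(type)!;     // error checking omitted for brevity
  gl.shaderSource(shader, src);
  gl.compileShader(shader);
  return shader;
}
const program = gl.createProgram()!;
gl.attachShader(program, compile(gl.VERTEX_SHADER, vsSource));
gl.attachShader(program, compile(gl.FRAGMENT_SHADER, fsSource));
gl.linkProgram(program);

// One VAO describing the interleaved layout: vec2 position, vec2 uv.
const vao = gl.createVertexArray()!;
const vbo = gl.createBuffer()!;
const ebo = gl.createBuffer()!;
gl.bindVertexArray(vao);
gl.bindBuffer(gl.ARRAY_BUFFER, vbo);
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, ebo); // the EBO binding is stored in the VAO
const stride = 4 * 4;                        // 4 floats per vertex
gl.enableVertexAttribArray(0);
gl.vertexAttribPointer(0, 2, gl.FLOAT, false, stride, 0);
gl.enableVertexAttribArray(1);
gl.vertexAttribPointer(1, 2, gl.FLOAT, false, stride, 8);

// Assumes the sprite-sheet texture has already been created and bound to unit 0.
function drawBatch(sprites: Sprite[]): void {
  // Build 4 vertices (pos + uv) and 6 indices per sprite on the CPU each frame.
  const verts = new Float32Array(sprites.length * 4 * 4);
  const indices = new Uint16Array(sprites.length * 6);
  sprites.forEach((s, i) => {
    verts.set([
      s.x,       s.y,       s.u0, s.v0,
      s.x + s.w, s.y,       s.u1, s.v0,
      s.x + s.w, s.y + s.h, s.u1, s.v1,
      s.x,       s.y + s.h, s.u0, s.v1,
    ], i * 16);
    const b = i * 4;
    indices.set([b, b + 1, b + 2, b, b + 2, b + 3], i * 6);
  });

  gl.useProgram(program);
  gl.bindVertexArray(vao);
  gl.bindBuffer(gl.ARRAY_BUFFER, vbo);  // ARRAY_BUFFER binding is not VAO state
  gl.bufferData(gl.ARRAY_BUFFER, verts, gl.STREAM_DRAW);
  gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, indices, gl.STREAM_DRAW);

  // Pixel-space orthographic projection with (0,0) in the top-left corner.
  const w = gl.canvas.width, h = gl.canvas.height;
  gl.uniformMatrix4fv(gl.getUniformLocation(program, 'uProjection'), false,
    new Float32Array([2 / w, 0, 0, 0,  0, -2 / h, 0, 0,  0, 0, -1, 0,  -1, 1, 0, 1]));

  // Thousands of textured sprites, one draw call.
  gl.drawElements(gl.TRIANGLES, indices.length, gl.UNSIGNED_SHORT, 0);
}
```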
Let me describe the "battlefield" of my task:
Multi-room audio/video chat with more than 1M users;
Custom Direct3D renderer;
What I need to implement is a TextOverVideo feature. The text itself goes over the network and is to be rendered on the recipient side with the Direct3D renderer. AFAIK, it is common in game development to create your own texture with letters/numbers and draw those items. Because our application must support many languages, we ought to use a standard approach. That's why I've been working with the ID3DXFont interface, but I've run into some unsatisfying limitations.
What I've faced is a lack of scalability. E.g. if the user resizes the video window, I have to re-create the D3DXFont with a new D3DXFONT_DESC while they're doing that. I think it is unacceptable.
That is why the ONLY solution I see (given my skills) is to somehow render the text to a texture and then draw it as a sprite with scaling, translation, etc.
So I'm not sure if I'm going in the right direction. Please help with advice, experience, literature, sources...
Your question is a bit unclear. As I understand it, you want an easily scalable font.
I think it is unacceptable
As far as I know, this is standard behavior for fonts - even for system fonts. They aren't supposed to be easily scalable.
Possible solutions:
Use ID3DXRenderTarget for rendering text onto a texture. The font will be filtered when you scale it up too much; some people will think that it looks ugly.
Write a custom library that supports vector fonts, i.e. one that can extract the outlines from a font and build text geometry from them. It will be MUCH slower than ID3DXFont (which is already slower than traditional "texture" fonts). Text will be easily scalable. With this approach you are very likely to get visible artifacts ("noise") for small text, so I wouldn't use it unless you want huge letters (40+ pixels). The FreeType library may have functions for processing font outlines.
Or you could try using D3DXCreateText. This will create 3D text for ONE string. Won't be fast at all.
I'd forget about it. As long as the user is happy with overall performance, improving the font-rendering routines (so their behavior looks nice to you) is not worth the effort.
--EDIT--
About ID3DXRenderTarget.
Even if you use ID3DXRenderTarget, you'll still need ID3DXFont. I.e. you use ID3DXFont to render the text onto a texture, and then use the texture to blit the text onto the screen.
Because you said that performance is critical, you can delay the creation of the new ID3DXFont until the user stops resizing the video. I.e. while the user is resizing, you keep using the old font but upscale it via the texture. There will be filtering, of course. Once the user stops resizing, you create the new font when you have time; you can probably do that in a separate thread, but I'm not sure about that. Or you could simply always render the text at the same resolution as the video. This way you won't have to worry about resizing it (it will still be filtered, along with the video). Some video players work this way.
A few more things about ID3DXFont. There is one problem with it: it is slow in situations where you need a lot of text (but you still need it, because it supports Unicode, and writing a texture font with Unicode support is a pain). Last time I worked with it, I optimized things by caching commonly used strings in textures. I.e. any string that was drawn for more than 3 frames in a row was rendered onto a D3DFMT_A8R8G8B8 texture/render target, and then I copied that string from the texture instead of using ID3DXFont. Strings that weren't rendered for a while were removed from the texture. That gave a serious boost. This solution, however, is tricky: monitoring empty space in the texture, removing unused strings and defragmenting the texture isn't exactly trivial (there is nothing exceptionally complicated, but it is easy to make a mistake). You won't need such a complicated system unless your screen is literally covered with text.
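The bookkeeping for that cache is language-agnostic; here is a stripped-down sketch of the "promote after a few frames, evict when stale" idea (TypeScript for brevity, with the actual ID3DXFont / render-target calls left as comments):

```ts
interface CachedString {
  lastUsedFrame: number;
  framesInARow: number;
  cached: boolean;   // true once the string lives in the atlas texture
  // atlas rectangle, texture handle, etc. would go here
}

const cache = new Map<string, CachedString>();
const PROMOTE_AFTER = 3;   // frames in a row before rasterizing into the atlas
const EVICT_AFTER = 120;   // frames without use before freeing the slot

function drawText(text: string, frame: number): void {
  const entry = cache.get(text)
    ?? { lastUsedFrame: -1, framesInARow: 0, cached: false };
  if (entry.lastUsedFrame === frame - 1) entry.framesInARow += 1;
  else if (entry.lastUsedFrame !== frame) entry.framesInARow = 1;
  entry.lastUsedFrame = frame;
  cache.set(text, entry);

  if (!entry.cached && entry.framesInARow >= PROMOTE_AFTER) {
    // Here: draw the string once with ID3DXFont into the A8R8G8B8 render target.
    entry.cached = true;
  }

  if (entry.cached) {
    // Here: blit the cached rectangle from the atlas (fast path).
  } else {
    // Here: fall back to drawing with ID3DXFont directly (slow path).
  }
}

function endFrame(frame: number): void {
  // Evict strings that haven't been drawn for a while to reclaim atlas space.
  for (const [text, entry] of cache) {
    if (frame - entry.lastUsedFrame > EVICT_AFTER) cache.delete(text);
  }
}
```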
ID3DXFont fonts are flat, always parallel to the screen. D3DXCreateText produces meshes that can be scaled and rotated.
Texture fonts are fuzzy and don't look very clear. Not good for an app that uses lots of small text.
I am writing an app that can create 500 text meshes, each mesh averaging 3,000-5,000 vertices. The text meshes are created once, then are static. I get 700 fps on a GeForce 8800.